00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1067 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3729 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.089 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.090 The recommended git tool is: git 00:00:00.090 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.143 Fetching changes from the remote Git repository 00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.214 > git --version # 'git version 2.39.2' 00:00:00.214 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.622 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.635 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.645 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.645 > git config core.sparsecheckout # timeout=10 00:00:06.657 > git read-tree -mu HEAD # timeout=10 00:00:06.675 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.696 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.696 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.783 [Pipeline] Start of Pipeline 00:00:06.797 [Pipeline] library 00:00:06.799 Loading library shm_lib@master 00:00:06.799 Library shm_lib@master is cached. Copying from home. 00:00:06.814 [Pipeline] node 00:00:06.825 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.826 [Pipeline] { 00:00:06.840 [Pipeline] catchError 00:00:06.842 [Pipeline] { 00:00:06.854 [Pipeline] wrap 00:00:06.862 [Pipeline] { 00:00:06.869 [Pipeline] stage 00:00:06.871 [Pipeline] { (Prologue) 00:00:06.885 [Pipeline] echo 00:00:06.886 Node: VM-host-SM9 00:00:06.890 [Pipeline] cleanWs 00:00:06.899 [WS-CLEANUP] Deleting project workspace... 00:00:06.899 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.904 [WS-CLEANUP] done 00:00:07.155 [Pipeline] setCustomBuildProperty 00:00:07.257 [Pipeline] httpRequest 00:00:07.953 [Pipeline] echo 00:00:07.955 Sorcerer 10.211.164.20 is alive 00:00:07.964 [Pipeline] retry 00:00:07.965 [Pipeline] { 00:00:07.978 [Pipeline] httpRequest 00:00:07.983 HttpMethod: GET 00:00:07.983 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.984 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.010 Response Code: HTTP/1.1 200 OK 00:00:08.011 Success: Status code 200 is in the accepted range: 200,404 00:00:08.011 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.474 [Pipeline] } 00:00:33.491 [Pipeline] // retry 00:00:33.498 [Pipeline] sh 00:00:33.779 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:33.794 [Pipeline] httpRequest 00:00:34.633 [Pipeline] echo 00:00:34.634 Sorcerer 10.211.164.20 is alive 00:00:34.643 [Pipeline] retry 00:00:34.645 [Pipeline] { 00:00:34.658 [Pipeline] httpRequest 00:00:34.663 HttpMethod: GET 00:00:34.663 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:34.663 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:00:34.670 Response Code: HTTP/1.1 200 OK 00:00:34.671 Success: Status code 200 is in the accepted range: 200,404 00:00:34.671 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:02:01.689 [Pipeline] } 00:02:01.707 [Pipeline] // retry 00:02:01.715 [Pipeline] sh 00:02:01.994 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz 00:02:05.297 [Pipeline] sh 00:02:05.576 + git -C spdk log --oneline -n5 00:02:05.576 e01cb43b8 mk/spdk.common.mk sed the minor version 00:02:05.576 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state 00:02:05.576 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:02:05.576 66289a6db build: use VERSION file for storing version 00:02:05.576 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:02:05.594 [Pipeline] withCredentials 00:02:05.603 > git --version # timeout=10 00:02:05.615 > git --version # 'git version 2.39.2' 00:02:05.630 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:02:05.632 [Pipeline] { 00:02:05.638 [Pipeline] retry 00:02:05.639 [Pipeline] { 00:02:05.651 [Pipeline] sh 00:02:05.928 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:02:05.938 [Pipeline] } 00:02:05.955 [Pipeline] // retry 00:02:05.960 [Pipeline] } 00:02:05.975 [Pipeline] // withCredentials 00:02:05.985 [Pipeline] httpRequest 00:02:06.328 [Pipeline] echo 00:02:06.330 Sorcerer 10.211.164.20 is alive 00:02:06.339 [Pipeline] retry 00:02:06.340 [Pipeline] { 00:02:06.353 [Pipeline] httpRequest 00:02:06.357 HttpMethod: GET 00:02:06.358 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:06.358 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:06.360 Response Code: HTTP/1.1 200 OK 00:02:06.360 Success: Status code 200 is in the accepted range: 200,404 00:02:06.361 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:10.802 [Pipeline] } 00:02:10.819 
[Pipeline] // retry 00:02:10.826 [Pipeline] sh 00:02:11.106 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:02:13.020 [Pipeline] sh 00:02:13.301 + git -C dpdk log --oneline -n5 00:02:13.301 eeb0605f11 version: 23.11.0 00:02:13.301 238778122a doc: update release notes for 23.11 00:02:13.301 46aa6b3cfc doc: fix description of RSS features 00:02:13.301 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:13.301 7e421ae345 devtools: support skipping forbid rule check 00:02:13.317 [Pipeline] writeFile 00:02:13.332 [Pipeline] sh 00:02:13.615 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:13.627 [Pipeline] sh 00:02:13.908 + cat autorun-spdk.conf 00:02:13.908 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:13.908 SPDK_TEST_NVMF=1 00:02:13.908 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:13.908 SPDK_TEST_URING=1 00:02:13.908 SPDK_TEST_VFIOUSER=1 00:02:13.908 SPDK_TEST_USDT=1 00:02:13.908 SPDK_RUN_UBSAN=1 00:02:13.908 NET_TYPE=virt 00:02:13.908 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:13.908 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:13.908 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:13.935 RUN_NIGHTLY=1 00:02:13.937 [Pipeline] } 00:02:13.950 [Pipeline] // stage 00:02:13.966 [Pipeline] stage 00:02:13.968 [Pipeline] { (Run VM) 00:02:13.981 [Pipeline] sh 00:02:14.270 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:14.270 + echo 'Start stage prepare_nvme.sh' 00:02:14.270 Start stage prepare_nvme.sh 00:02:14.270 + [[ -n 4 ]] 00:02:14.270 + disk_prefix=ex4 00:02:14.270 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:14.270 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:14.270 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:14.270 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:14.270 ++ SPDK_TEST_NVMF=1 00:02:14.270 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:14.270 ++ SPDK_TEST_URING=1 00:02:14.270 ++ SPDK_TEST_VFIOUSER=1 00:02:14.270 ++ SPDK_TEST_USDT=1 00:02:14.270 ++ SPDK_RUN_UBSAN=1 00:02:14.270 ++ NET_TYPE=virt 00:02:14.270 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:14.270 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:14.270 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:14.270 ++ RUN_NIGHTLY=1 00:02:14.270 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:14.270 + nvme_files=() 00:02:14.270 + declare -A nvme_files 00:02:14.270 + backend_dir=/var/lib/libvirt/images/backends 00:02:14.270 + nvme_files['nvme.img']=5G 00:02:14.270 + nvme_files['nvme-cmb.img']=5G 00:02:14.270 + nvme_files['nvme-multi0.img']=4G 00:02:14.270 + nvme_files['nvme-multi1.img']=4G 00:02:14.270 + nvme_files['nvme-multi2.img']=4G 00:02:14.270 + nvme_files['nvme-openstack.img']=8G 00:02:14.270 + nvme_files['nvme-zns.img']=5G 00:02:14.270 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:14.270 + (( SPDK_TEST_FTL == 1 )) 00:02:14.270 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:14.270 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:02:14.270 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:02:14.270 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:02:14.270 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:02:14.270 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:02:14.270 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:02:14.270 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:14.270 + for nvme in "${!nvme_files[@]}" 00:02:14.270 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:02:14.536 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:14.536 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:02:14.536 + echo 'End stage prepare_nvme.sh' 00:02:14.536 End stage prepare_nvme.sh 00:02:14.550 [Pipeline] sh 00:02:14.832 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:14.832 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:02:14.832 00:02:14.832 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:14.832 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:14.832 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:14.832 HELP=0 00:02:14.832 DRY_RUN=0 00:02:14.832 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:02:14.832 NVME_DISKS_TYPE=nvme,nvme, 00:02:14.832 NVME_AUTO_CREATE=0 00:02:14.832 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:02:14.832 NVME_CMB=,, 00:02:14.832 NVME_PMR=,, 00:02:14.832 NVME_ZNS=,, 00:02:14.832 NVME_MS=,, 00:02:14.832 NVME_FDP=,, 
00:02:14.832 SPDK_VAGRANT_DISTRO=fedora39 00:02:14.833 SPDK_VAGRANT_VMCPU=10 00:02:14.833 SPDK_VAGRANT_VMRAM=12288 00:02:14.833 SPDK_VAGRANT_PROVIDER=libvirt 00:02:14.833 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:14.833 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:14.833 SPDK_OPENSTACK_NETWORK=0 00:02:14.833 VAGRANT_PACKAGE_BOX=0 00:02:14.833 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:14.833 FORCE_DISTRO=true 00:02:14.833 VAGRANT_BOX_VERSION= 00:02:14.833 EXTRA_VAGRANTFILES= 00:02:14.833 NIC_MODEL=e1000 00:02:14.833 00:02:14.833 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:14.833 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:18.119 Bringing machine 'default' up with 'libvirt' provider... 00:02:18.686 ==> default: Creating image (snapshot of base box volume). 00:02:18.686 ==> default: Creating domain with the following settings... 00:02:18.686 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734312229_41b7f4e680a26d4b0138 00:02:18.686 ==> default: -- Domain type: kvm 00:02:18.686 ==> default: -- Cpus: 10 00:02:18.686 ==> default: -- Feature: acpi 00:02:18.686 ==> default: -- Feature: apic 00:02:18.686 ==> default: -- Feature: pae 00:02:18.686 ==> default: -- Memory: 12288M 00:02:18.686 ==> default: -- Memory Backing: hugepages: 00:02:18.686 ==> default: -- Management MAC: 00:02:18.686 ==> default: -- Loader: 00:02:18.686 ==> default: -- Nvram: 00:02:18.686 ==> default: -- Base box: spdk/fedora39 00:02:18.686 ==> default: -- Storage pool: default 00:02:18.686 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734312229_41b7f4e680a26d4b0138.img (20G) 00:02:18.686 ==> default: -- Volume Cache: default 00:02:18.686 ==> default: -- Kernel: 00:02:18.686 ==> default: -- Initrd: 00:02:18.686 ==> default: -- Graphics Type: vnc 00:02:18.686 ==> default: -- Graphics Port: -1 00:02:18.686 ==> default: -- Graphics IP: 127.0.0.1 00:02:18.686 ==> default: -- Graphics Password: Not defined 00:02:18.686 ==> default: -- Video Type: cirrus 00:02:18.686 ==> default: -- Video VRAM: 9216 00:02:18.686 ==> default: -- Sound Type: 00:02:18.686 ==> default: -- Keymap: en-us 00:02:18.686 ==> default: -- TPM Path: 00:02:18.686 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:18.686 ==> default: -- Command line args: 00:02:18.686 ==> default: -> value=-device, 00:02:18.686 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:18.686 ==> default: -> value=-drive, 00:02:18.686 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:02:18.686 ==> default: -> value=-device, 00:02:18.686 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:18.686 ==> default: -> value=-device, 00:02:18.686 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:18.686 ==> default: -> value=-drive, 00:02:18.686 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:18.686 ==> default: -> value=-device, 00:02:18.686 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:18.686 ==> default: -> value=-drive, 00:02:18.686 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:18.686 ==> default: -> value=-device, 00:02:18.686 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:18.686 ==> default: -> value=-drive, 00:02:18.686 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:18.686 ==> default: -> value=-device, 00:02:18.686 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:18.686 ==> default: Creating shared folders metadata... 00:02:18.686 ==> default: Starting domain. 00:02:20.062 ==> default: Waiting for domain to get an IP address... 00:02:38.141 ==> default: Waiting for SSH to become available... 00:02:38.141 ==> default: Configuring and enabling network interfaces... 00:02:40.671 default: SSH address: 192.168.121.175:22 00:02:40.671 default: SSH username: vagrant 00:02:40.671 default: SSH auth method: private key 00:02:42.571 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:49.160 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:55.725 ==> default: Mounting SSHFS shared folder... 00:02:56.661 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:56.661 ==> default: Checking Mount.. 00:02:58.038 ==> default: Folder Successfully Mounted! 00:02:58.038 ==> default: Running provisioner: file... 00:02:58.631 default: ~/.gitconfig => .gitconfig 00:02:59.199 00:02:59.199 SUCCESS! 00:02:59.199 00:02:59.199 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:59.199 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:59.199 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:59.199 00:02:59.209 [Pipeline] } 00:02:59.223 [Pipeline] // stage 00:02:59.233 [Pipeline] dir 00:02:59.233 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:59.235 [Pipeline] { 00:02:59.248 [Pipeline] catchError 00:02:59.250 [Pipeline] { 00:02:59.262 [Pipeline] sh 00:02:59.541 + vagrant ssh-config --host vagrant 00:02:59.541 + sed -ne /^Host/,$p 00:02:59.541 + tee ssh_conf 00:03:02.827 Host vagrant 00:03:02.827 HostName 192.168.121.175 00:03:02.827 User vagrant 00:03:02.827 Port 22 00:03:02.827 UserKnownHostsFile /dev/null 00:03:02.827 StrictHostKeyChecking no 00:03:02.827 PasswordAuthentication no 00:03:02.827 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:02.827 IdentitiesOnly yes 00:03:02.827 LogLevel FATAL 00:03:02.827 ForwardAgent yes 00:03:02.827 ForwardX11 yes 00:03:02.827 00:03:02.841 [Pipeline] withEnv 00:03:02.843 [Pipeline] { 00:03:02.857 [Pipeline] sh 00:03:03.137 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:03.137 source /etc/os-release 00:03:03.137 [[ -e /image.version ]] && img=$(< /image.version) 00:03:03.137 # Minimal, systemd-like check. 
00:03:03.137 if [[ -e /.dockerenv ]]; then 00:03:03.137 # Clear garbage from the node's name: 00:03:03.137 # agt-er_autotest_547-896 -> autotest_547-896 00:03:03.137 # $HOSTNAME is the actual container id 00:03:03.137 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:03.137 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:03.137 # We can assume this is a mount from a host where container is running, 00:03:03.137 # so fetch its hostname to easily identify the target swarm worker. 00:03:03.137 container="$(< /etc/hostname) ($agent)" 00:03:03.137 else 00:03:03.137 # Fallback 00:03:03.137 container=$agent 00:03:03.137 fi 00:03:03.137 fi 00:03:03.137 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:03.137 00:03:03.409 [Pipeline] } 00:03:03.425 [Pipeline] // withEnv 00:03:03.434 [Pipeline] setCustomBuildProperty 00:03:03.449 [Pipeline] stage 00:03:03.452 [Pipeline] { (Tests) 00:03:03.469 [Pipeline] sh 00:03:03.749 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:04.021 [Pipeline] sh 00:03:04.302 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:04.583 [Pipeline] timeout 00:03:04.584 Timeout set to expire in 1 hr 0 min 00:03:04.588 [Pipeline] { 00:03:04.599 [Pipeline] sh 00:03:04.874 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:05.441 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version 00:03:05.453 [Pipeline] sh 00:03:05.733 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:06.006 [Pipeline] sh 00:03:06.285 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:06.560 [Pipeline] sh 00:03:06.880 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:03:06.880 ++ readlink -f spdk_repo 00:03:06.880 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:06.880 + [[ -n /home/vagrant/spdk_repo ]] 00:03:06.880 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:06.880 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:06.880 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:06.880 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:06.880 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:06.880 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:03:06.880 + cd /home/vagrant/spdk_repo 00:03:06.880 + source /etc/os-release 00:03:06.880 ++ NAME='Fedora Linux' 00:03:06.880 ++ VERSION='39 (Cloud Edition)' 00:03:06.880 ++ ID=fedora 00:03:06.880 ++ VERSION_ID=39 00:03:06.880 ++ VERSION_CODENAME= 00:03:06.880 ++ PLATFORM_ID=platform:f39 00:03:06.880 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:06.880 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:06.880 ++ LOGO=fedora-logo-icon 00:03:06.880 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:06.880 ++ HOME_URL=https://fedoraproject.org/ 00:03:06.880 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:06.880 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:06.880 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:06.880 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:06.880 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:06.880 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:06.880 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:06.880 ++ SUPPORT_END=2024-11-12 00:03:06.880 ++ VARIANT='Cloud Edition' 00:03:06.880 ++ VARIANT_ID=cloud 00:03:06.880 + uname -a 00:03:06.880 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:06.880 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:07.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:07.478 Hugepages 00:03:07.478 node hugesize free / total 00:03:07.478 node0 1048576kB 0 / 0 00:03:07.478 node0 2048kB 0 / 0 00:03:07.478 00:03:07.478 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:07.478 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:07.478 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:07.478 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:07.478 + rm -f /tmp/spdk-ld-path 00:03:07.478 + source autorun-spdk.conf 00:03:07.478 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:07.478 ++ SPDK_TEST_NVMF=1 00:03:07.478 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:07.478 ++ SPDK_TEST_URING=1 00:03:07.478 ++ SPDK_TEST_VFIOUSER=1 00:03:07.478 ++ SPDK_TEST_USDT=1 00:03:07.478 ++ SPDK_RUN_UBSAN=1 00:03:07.478 ++ NET_TYPE=virt 00:03:07.478 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:07.478 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:07.478 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:07.478 ++ RUN_NIGHTLY=1 00:03:07.478 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:07.478 + [[ -n '' ]] 00:03:07.478 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:07.738 + for M in /var/spdk/build-*-manifest.txt 00:03:07.738 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:07.738 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:07.738 + for M in /var/spdk/build-*-manifest.txt 00:03:07.738 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:07.738 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:07.738 + for M in /var/spdk/build-*-manifest.txt 00:03:07.738 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:07.738 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:07.738 ++ uname 00:03:07.738 + [[ Linux == \L\i\n\u\x ]] 00:03:07.738 + sudo dmesg -T 00:03:07.738 + sudo dmesg --clear 00:03:07.738 + dmesg_pid=6000 
00:03:07.738 + sudo dmesg -Tw 00:03:07.738 + [[ Fedora Linux == FreeBSD ]] 00:03:07.738 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:07.738 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:07.738 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:07.738 + [[ -x /usr/src/fio-static/fio ]] 00:03:07.738 + export FIO_BIN=/usr/src/fio-static/fio 00:03:07.738 + FIO_BIN=/usr/src/fio-static/fio 00:03:07.738 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:07.738 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:07.738 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:07.738 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:07.738 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:07.738 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:07.738 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:07.738 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:07.738 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.738 01:24:38 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:07.738 01:24:38 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@8 -- $ NET_TYPE=virt 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:07.738 01:24:38 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:03:07.738 01:24:38 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:07.738 01:24:38 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:07.738 01:24:38 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:03:07.738 01:24:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:07.738 01:24:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:07.738 01:24:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:07.738 01:24:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.738 01:24:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.738 01:24:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.738 01:24:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.738 01:24:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.738 01:24:38 -- paths/export.sh@5 -- $ export PATH 00:03:07.738 01:24:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.738 01:24:38 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:07.738 01:24:38 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:07.738 01:24:38 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734312278.XXXXXX 00:03:07.738 01:24:38 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734312278.89t6kB 00:03:07.738 01:24:38 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:07.738 01:24:38 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:03:07.738 01:24:38 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:07.998 01:24:38 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:03:07.998 01:24:38 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:07.998 01:24:38 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:07.998 01:24:38 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:07.998 01:24:38 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:07.998 01:24:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:07.998 01:24:38 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:03:07.998 01:24:38 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:07.998 01:24:38 -- pm/common@17 -- $ local monitor 00:03:07.998 01:24:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.998 01:24:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.998 01:24:38 -- pm/common@25 -- $ sleep 1 
00:03:07.998 01:24:38 -- pm/common@21 -- $ date +%s 00:03:07.998 01:24:38 -- pm/common@21 -- $ date +%s 00:03:07.998 01:24:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734312278 00:03:07.998 01:24:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734312278 00:03:07.998 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734312278_collect-vmstat.pm.log 00:03:07.998 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734312278_collect-cpu-load.pm.log 00:03:08.935 01:24:39 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:08.935 01:24:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:08.935 01:24:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:08.935 01:24:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:08.935 01:24:39 -- spdk/autobuild.sh@16 -- $ date -u 00:03:08.935 Mon Dec 16 01:24:39 AM UTC 2024 00:03:08.935 01:24:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:08.935 v25.01-rc1-2-ge01cb43b8 00:03:08.935 01:24:39 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:08.935 01:24:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:08.935 01:24:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:08.935 01:24:39 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:08.935 01:24:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:08.935 01:24:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.936 ************************************ 00:03:08.936 START TEST ubsan 00:03:08.936 ************************************ 00:03:08.936 using ubsan 00:03:08.936 01:24:39 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:08.936 00:03:08.936 real 0m0.000s 00:03:08.936 user 0m0.000s 00:03:08.936 sys 0m0.000s 00:03:08.936 01:24:39 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:08.936 01:24:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:08.936 ************************************ 00:03:08.936 END TEST ubsan 00:03:08.936 ************************************ 00:03:08.936 01:24:39 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:03:08.936 01:24:39 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:08.936 01:24:39 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:08.936 01:24:39 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:03:08.936 01:24:39 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:08.936 01:24:39 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.936 ************************************ 00:03:08.936 START TEST build_native_dpdk 00:03:08.936 ************************************ 00:03:08.936 01:24:39 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:08.936 01:24:39 
build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:08.936 eeb0605f11 version: 23.11.0 00:03:08.936 238778122a doc: update release notes for 23.11 00:03:08.936 46aa6b3cfc doc: fix description of RSS features 00:03:08.936 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:03:08.936 7e421ae345 devtools: support skipping forbid rule check 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:08.936 01:24:39 
build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:03:08.936 patching file config/rte_config.h 00:03:08.936 Hunk #1 succeeded at 60 (offset 1 line). 
00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:03:08.936 patching file lib/pcapng/rte_pcapng.c 00:03:08.936 01:24:39 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:08.936 01:24:39 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:08.937 01:24:39 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:03:08.937 01:24:39 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:03:08.937 01:24:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:03:09.195 01:24:39 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:03:09.195 01:24:39 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:03:09.195 01:24:39 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:14.467 The Meson build system 00:03:14.467 Version: 1.5.0 00:03:14.467 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:14.467 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:14.467 Build type: native build 00:03:14.467 Program cat found: YES (/usr/bin/cat) 00:03:14.467 Project name: DPDK 00:03:14.467 Project version: 23.11.0 00:03:14.467 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:14.467 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:14.467 Host machine cpu family: x86_64 00:03:14.467 Host machine cpu: x86_64 00:03:14.467 Message: ## Building in Developer Mode ## 00:03:14.467 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:14.467 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:14.467 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:14.467 Program python3 found: YES (/usr/bin/python3) 
00:03:14.467 Program cat found: YES (/usr/bin/cat) 00:03:14.467 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:14.467 Compiler for C supports arguments -march=native: YES 00:03:14.467 Checking for size of "void *" : 8 00:03:14.467 Checking for size of "void *" : 8 (cached) 00:03:14.467 Library m found: YES 00:03:14.467 Library numa found: YES 00:03:14.467 Has header "numaif.h" : YES 00:03:14.467 Library fdt found: NO 00:03:14.467 Library execinfo found: NO 00:03:14.467 Has header "execinfo.h" : YES 00:03:14.467 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:14.467 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:14.467 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:14.467 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:14.467 Run-time dependency openssl found: YES 3.1.1 00:03:14.467 Run-time dependency libpcap found: YES 1.10.4 00:03:14.467 Has header "pcap.h" with dependency libpcap: YES 00:03:14.467 Compiler for C supports arguments -Wcast-qual: YES 00:03:14.467 Compiler for C supports arguments -Wdeprecated: YES 00:03:14.467 Compiler for C supports arguments -Wformat: YES 00:03:14.467 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:14.467 Compiler for C supports arguments -Wformat-security: NO 00:03:14.467 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:14.467 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:14.467 Compiler for C supports arguments -Wnested-externs: YES 00:03:14.467 Compiler for C supports arguments -Wold-style-definition: YES 00:03:14.467 Compiler for C supports arguments -Wpointer-arith: YES 00:03:14.467 Compiler for C supports arguments -Wsign-compare: YES 00:03:14.467 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:14.467 Compiler for C supports arguments -Wundef: YES 00:03:14.467 Compiler for C supports arguments -Wwrite-strings: YES 00:03:14.467 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:14.467 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:14.467 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:14.467 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:14.467 Program objdump found: YES (/usr/bin/objdump) 00:03:14.467 Compiler for C supports arguments -mavx512f: YES 00:03:14.467 Checking if "AVX512 checking" compiles: YES 00:03:14.467 Fetching value of define "__SSE4_2__" : 1 00:03:14.467 Fetching value of define "__AES__" : 1 00:03:14.467 Fetching value of define "__AVX__" : 1 00:03:14.467 Fetching value of define "__AVX2__" : 1 00:03:14.467 Fetching value of define "__AVX512BW__" : (undefined) 00:03:14.467 Fetching value of define "__AVX512CD__" : (undefined) 00:03:14.467 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:14.467 Fetching value of define "__AVX512F__" : (undefined) 00:03:14.467 Fetching value of define "__AVX512VL__" : (undefined) 00:03:14.467 Fetching value of define "__PCLMUL__" : 1 00:03:14.467 Fetching value of define "__RDRND__" : 1 00:03:14.467 Fetching value of define "__RDSEED__" : 1 00:03:14.467 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:14.467 Fetching value of define "__znver1__" : (undefined) 00:03:14.467 Fetching value of define "__znver2__" : (undefined) 00:03:14.467 Fetching value of define "__znver3__" : (undefined) 00:03:14.467 Fetching value of define "__znver4__" : (undefined) 00:03:14.467 Compiler for C supports 
arguments -Wno-format-truncation: YES 00:03:14.467 Message: lib/log: Defining dependency "log" 00:03:14.467 Message: lib/kvargs: Defining dependency "kvargs" 00:03:14.467 Message: lib/telemetry: Defining dependency "telemetry" 00:03:14.467 Checking for function "getentropy" : NO 00:03:14.467 Message: lib/eal: Defining dependency "eal" 00:03:14.467 Message: lib/ring: Defining dependency "ring" 00:03:14.467 Message: lib/rcu: Defining dependency "rcu" 00:03:14.467 Message: lib/mempool: Defining dependency "mempool" 00:03:14.467 Message: lib/mbuf: Defining dependency "mbuf" 00:03:14.467 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:14.467 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:14.467 Compiler for C supports arguments -mpclmul: YES 00:03:14.467 Compiler for C supports arguments -maes: YES 00:03:14.467 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:14.467 Compiler for C supports arguments -mavx512bw: YES 00:03:14.467 Compiler for C supports arguments -mavx512dq: YES 00:03:14.467 Compiler for C supports arguments -mavx512vl: YES 00:03:14.467 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:14.467 Compiler for C supports arguments -mavx2: YES 00:03:14.467 Compiler for C supports arguments -mavx: YES 00:03:14.467 Message: lib/net: Defining dependency "net" 00:03:14.467 Message: lib/meter: Defining dependency "meter" 00:03:14.467 Message: lib/ethdev: Defining dependency "ethdev" 00:03:14.467 Message: lib/pci: Defining dependency "pci" 00:03:14.467 Message: lib/cmdline: Defining dependency "cmdline" 00:03:14.467 Message: lib/metrics: Defining dependency "metrics" 00:03:14.467 Message: lib/hash: Defining dependency "hash" 00:03:14.467 Message: lib/timer: Defining dependency "timer" 00:03:14.467 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:14.467 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:14.467 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:14.467 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:14.467 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:14.467 Message: lib/acl: Defining dependency "acl" 00:03:14.467 Message: lib/bbdev: Defining dependency "bbdev" 00:03:14.467 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:14.467 Run-time dependency libelf found: YES 0.191 00:03:14.467 Message: lib/bpf: Defining dependency "bpf" 00:03:14.467 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:14.467 Message: lib/compressdev: Defining dependency "compressdev" 00:03:14.467 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:14.467 Message: lib/distributor: Defining dependency "distributor" 00:03:14.467 Message: lib/dmadev: Defining dependency "dmadev" 00:03:14.467 Message: lib/efd: Defining dependency "efd" 00:03:14.467 Message: lib/eventdev: Defining dependency "eventdev" 00:03:14.467 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:14.467 Message: lib/gpudev: Defining dependency "gpudev" 00:03:14.467 Message: lib/gro: Defining dependency "gro" 00:03:14.467 Message: lib/gso: Defining dependency "gso" 00:03:14.467 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:14.468 Message: lib/jobstats: Defining dependency "jobstats" 00:03:14.468 Message: lib/latencystats: Defining dependency "latencystats" 00:03:14.468 Message: lib/lpm: Defining dependency "lpm" 00:03:14.468 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:14.468 Fetching value of 
define "__AVX512DQ__" : (undefined) (cached) 00:03:14.468 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:14.468 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:14.468 Message: lib/member: Defining dependency "member" 00:03:14.468 Message: lib/pcapng: Defining dependency "pcapng" 00:03:14.468 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:14.468 Message: lib/power: Defining dependency "power" 00:03:14.468 Message: lib/rawdev: Defining dependency "rawdev" 00:03:14.468 Message: lib/regexdev: Defining dependency "regexdev" 00:03:14.468 Message: lib/mldev: Defining dependency "mldev" 00:03:14.468 Message: lib/rib: Defining dependency "rib" 00:03:14.468 Message: lib/reorder: Defining dependency "reorder" 00:03:14.468 Message: lib/sched: Defining dependency "sched" 00:03:14.468 Message: lib/security: Defining dependency "security" 00:03:14.468 Message: lib/stack: Defining dependency "stack" 00:03:14.468 Has header "linux/userfaultfd.h" : YES 00:03:14.468 Has header "linux/vduse.h" : YES 00:03:14.468 Message: lib/vhost: Defining dependency "vhost" 00:03:14.468 Message: lib/ipsec: Defining dependency "ipsec" 00:03:14.468 Message: lib/pdcp: Defining dependency "pdcp" 00:03:14.468 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:14.468 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:14.468 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:14.468 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:14.468 Message: lib/fib: Defining dependency "fib" 00:03:14.468 Message: lib/port: Defining dependency "port" 00:03:14.468 Message: lib/pdump: Defining dependency "pdump" 00:03:14.468 Message: lib/table: Defining dependency "table" 00:03:14.468 Message: lib/pipeline: Defining dependency "pipeline" 00:03:14.468 Message: lib/graph: Defining dependency "graph" 00:03:14.468 Message: lib/node: Defining dependency "node" 00:03:14.468 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:16.370 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:16.370 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:16.370 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:16.370 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:16.370 Compiler for C supports arguments -Wno-unused-value: YES 00:03:16.370 Compiler for C supports arguments -Wno-format: YES 00:03:16.370 Compiler for C supports arguments -Wno-format-security: YES 00:03:16.370 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:16.370 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:16.370 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:16.370 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:16.370 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:16.370 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:16.370 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:16.370 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:16.370 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:16.370 Has header "sys/epoll.h" : YES 00:03:16.370 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:16.370 Configuring doxy-api-html.conf using configuration 00:03:16.370 Configuring doxy-api-man.conf using configuration 00:03:16.370 Program mandb found: YES (/usr/bin/mandb) 00:03:16.370 Program sphinx-build found: NO 00:03:16.370 
Configuring rte_build_config.h using configuration 00:03:16.370 Message: 00:03:16.370 ================= 00:03:16.370 Applications Enabled 00:03:16.370 ================= 00:03:16.370 00:03:16.370 apps: 00:03:16.370 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:16.370 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:16.370 test-pmd, test-regex, test-sad, test-security-perf, 00:03:16.370 00:03:16.370 Message: 00:03:16.370 ================= 00:03:16.370 Libraries Enabled 00:03:16.370 ================= 00:03:16.370 00:03:16.370 libs: 00:03:16.370 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:16.370 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:16.370 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:16.370 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:16.370 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:16.370 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:16.370 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:16.370 00:03:16.370 00:03:16.370 Message: 00:03:16.370 =============== 00:03:16.370 Drivers Enabled 00:03:16.370 =============== 00:03:16.370 00:03:16.370 common: 00:03:16.370 00:03:16.370 bus: 00:03:16.370 pci, vdev, 00:03:16.370 mempool: 00:03:16.370 ring, 00:03:16.370 dma: 00:03:16.370 00:03:16.371 net: 00:03:16.371 i40e, 00:03:16.371 raw: 00:03:16.371 00:03:16.371 crypto: 00:03:16.371 00:03:16.371 compress: 00:03:16.371 00:03:16.371 regex: 00:03:16.371 00:03:16.371 ml: 00:03:16.371 00:03:16.371 vdpa: 00:03:16.371 00:03:16.371 event: 00:03:16.371 00:03:16.371 baseband: 00:03:16.371 00:03:16.371 gpu: 00:03:16.371 00:03:16.371 00:03:16.371 Message: 00:03:16.371 ================= 00:03:16.371 Content Skipped 00:03:16.371 ================= 00:03:16.371 00:03:16.371 apps: 00:03:16.371 00:03:16.371 libs: 00:03:16.371 00:03:16.371 drivers: 00:03:16.371 common/cpt: not in enabled drivers build config 00:03:16.371 common/dpaax: not in enabled drivers build config 00:03:16.371 common/iavf: not in enabled drivers build config 00:03:16.371 common/idpf: not in enabled drivers build config 00:03:16.371 common/mvep: not in enabled drivers build config 00:03:16.371 common/octeontx: not in enabled drivers build config 00:03:16.371 bus/auxiliary: not in enabled drivers build config 00:03:16.371 bus/cdx: not in enabled drivers build config 00:03:16.371 bus/dpaa: not in enabled drivers build config 00:03:16.371 bus/fslmc: not in enabled drivers build config 00:03:16.371 bus/ifpga: not in enabled drivers build config 00:03:16.371 bus/platform: not in enabled drivers build config 00:03:16.371 bus/vmbus: not in enabled drivers build config 00:03:16.371 common/cnxk: not in enabled drivers build config 00:03:16.371 common/mlx5: not in enabled drivers build config 00:03:16.371 common/nfp: not in enabled drivers build config 00:03:16.371 common/qat: not in enabled drivers build config 00:03:16.371 common/sfc_efx: not in enabled drivers build config 00:03:16.371 mempool/bucket: not in enabled drivers build config 00:03:16.371 mempool/cnxk: not in enabled drivers build config 00:03:16.371 mempool/dpaa: not in enabled drivers build config 00:03:16.371 mempool/dpaa2: not in enabled drivers build config 00:03:16.371 mempool/octeontx: not in enabled drivers build config 00:03:16.371 mempool/stack: not in enabled drivers build config 00:03:16.371 dma/cnxk: 
not in enabled drivers build config 00:03:16.371 dma/dpaa: not in enabled drivers build config 00:03:16.371 dma/dpaa2: not in enabled drivers build config 00:03:16.371 dma/hisilicon: not in enabled drivers build config 00:03:16.371 dma/idxd: not in enabled drivers build config 00:03:16.371 dma/ioat: not in enabled drivers build config 00:03:16.371 dma/skeleton: not in enabled drivers build config 00:03:16.371 net/af_packet: not in enabled drivers build config 00:03:16.371 net/af_xdp: not in enabled drivers build config 00:03:16.371 net/ark: not in enabled drivers build config 00:03:16.371 net/atlantic: not in enabled drivers build config 00:03:16.371 net/avp: not in enabled drivers build config 00:03:16.371 net/axgbe: not in enabled drivers build config 00:03:16.371 net/bnx2x: not in enabled drivers build config 00:03:16.371 net/bnxt: not in enabled drivers build config 00:03:16.371 net/bonding: not in enabled drivers build config 00:03:16.371 net/cnxk: not in enabled drivers build config 00:03:16.371 net/cpfl: not in enabled drivers build config 00:03:16.371 net/cxgbe: not in enabled drivers build config 00:03:16.371 net/dpaa: not in enabled drivers build config 00:03:16.371 net/dpaa2: not in enabled drivers build config 00:03:16.371 net/e1000: not in enabled drivers build config 00:03:16.371 net/ena: not in enabled drivers build config 00:03:16.371 net/enetc: not in enabled drivers build config 00:03:16.371 net/enetfec: not in enabled drivers build config 00:03:16.371 net/enic: not in enabled drivers build config 00:03:16.371 net/failsafe: not in enabled drivers build config 00:03:16.371 net/fm10k: not in enabled drivers build config 00:03:16.371 net/gve: not in enabled drivers build config 00:03:16.371 net/hinic: not in enabled drivers build config 00:03:16.371 net/hns3: not in enabled drivers build config 00:03:16.371 net/iavf: not in enabled drivers build config 00:03:16.371 net/ice: not in enabled drivers build config 00:03:16.371 net/idpf: not in enabled drivers build config 00:03:16.371 net/igc: not in enabled drivers build config 00:03:16.371 net/ionic: not in enabled drivers build config 00:03:16.371 net/ipn3ke: not in enabled drivers build config 00:03:16.371 net/ixgbe: not in enabled drivers build config 00:03:16.371 net/mana: not in enabled drivers build config 00:03:16.371 net/memif: not in enabled drivers build config 00:03:16.371 net/mlx4: not in enabled drivers build config 00:03:16.371 net/mlx5: not in enabled drivers build config 00:03:16.371 net/mvneta: not in enabled drivers build config 00:03:16.371 net/mvpp2: not in enabled drivers build config 00:03:16.371 net/netvsc: not in enabled drivers build config 00:03:16.371 net/nfb: not in enabled drivers build config 00:03:16.371 net/nfp: not in enabled drivers build config 00:03:16.371 net/ngbe: not in enabled drivers build config 00:03:16.371 net/null: not in enabled drivers build config 00:03:16.371 net/octeontx: not in enabled drivers build config 00:03:16.371 net/octeon_ep: not in enabled drivers build config 00:03:16.371 net/pcap: not in enabled drivers build config 00:03:16.371 net/pfe: not in enabled drivers build config 00:03:16.371 net/qede: not in enabled drivers build config 00:03:16.371 net/ring: not in enabled drivers build config 00:03:16.371 net/sfc: not in enabled drivers build config 00:03:16.371 net/softnic: not in enabled drivers build config 00:03:16.371 net/tap: not in enabled drivers build config 00:03:16.371 net/thunderx: not in enabled drivers build config 00:03:16.371 net/txgbe: not in enabled 
drivers build config 00:03:16.371 net/vdev_netvsc: not in enabled drivers build config 00:03:16.371 net/vhost: not in enabled drivers build config 00:03:16.371 net/virtio: not in enabled drivers build config 00:03:16.371 net/vmxnet3: not in enabled drivers build config 00:03:16.371 raw/cnxk_bphy: not in enabled drivers build config 00:03:16.371 raw/cnxk_gpio: not in enabled drivers build config 00:03:16.371 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:16.371 raw/ifpga: not in enabled drivers build config 00:03:16.371 raw/ntb: not in enabled drivers build config 00:03:16.371 raw/skeleton: not in enabled drivers build config 00:03:16.371 crypto/armv8: not in enabled drivers build config 00:03:16.371 crypto/bcmfs: not in enabled drivers build config 00:03:16.371 crypto/caam_jr: not in enabled drivers build config 00:03:16.371 crypto/ccp: not in enabled drivers build config 00:03:16.371 crypto/cnxk: not in enabled drivers build config 00:03:16.371 crypto/dpaa_sec: not in enabled drivers build config 00:03:16.371 crypto/dpaa2_sec: not in enabled drivers build config 00:03:16.371 crypto/ipsec_mb: not in enabled drivers build config 00:03:16.371 crypto/mlx5: not in enabled drivers build config 00:03:16.371 crypto/mvsam: not in enabled drivers build config 00:03:16.371 crypto/nitrox: not in enabled drivers build config 00:03:16.371 crypto/null: not in enabled drivers build config 00:03:16.371 crypto/octeontx: not in enabled drivers build config 00:03:16.371 crypto/openssl: not in enabled drivers build config 00:03:16.371 crypto/scheduler: not in enabled drivers build config 00:03:16.371 crypto/uadk: not in enabled drivers build config 00:03:16.371 crypto/virtio: not in enabled drivers build config 00:03:16.371 compress/isal: not in enabled drivers build config 00:03:16.371 compress/mlx5: not in enabled drivers build config 00:03:16.371 compress/octeontx: not in enabled drivers build config 00:03:16.371 compress/zlib: not in enabled drivers build config 00:03:16.371 regex/mlx5: not in enabled drivers build config 00:03:16.371 regex/cn9k: not in enabled drivers build config 00:03:16.371 ml/cnxk: not in enabled drivers build config 00:03:16.371 vdpa/ifc: not in enabled drivers build config 00:03:16.371 vdpa/mlx5: not in enabled drivers build config 00:03:16.371 vdpa/nfp: not in enabled drivers build config 00:03:16.371 vdpa/sfc: not in enabled drivers build config 00:03:16.371 event/cnxk: not in enabled drivers build config 00:03:16.371 event/dlb2: not in enabled drivers build config 00:03:16.371 event/dpaa: not in enabled drivers build config 00:03:16.371 event/dpaa2: not in enabled drivers build config 00:03:16.371 event/dsw: not in enabled drivers build config 00:03:16.371 event/opdl: not in enabled drivers build config 00:03:16.371 event/skeleton: not in enabled drivers build config 00:03:16.371 event/sw: not in enabled drivers build config 00:03:16.371 event/octeontx: not in enabled drivers build config 00:03:16.371 baseband/acc: not in enabled drivers build config 00:03:16.371 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:16.371 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:16.371 baseband/la12xx: not in enabled drivers build config 00:03:16.371 baseband/null: not in enabled drivers build config 00:03:16.371 baseband/turbo_sw: not in enabled drivers build config 00:03:16.371 gpu/cuda: not in enabled drivers build config 00:03:16.371 00:03:16.371 00:03:16.371 Build targets in project: 220 00:03:16.371 00:03:16.371 DPDK 23.11.0 00:03:16.371 
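The summary above is what this build provides at run time: the core libraries (eal, ring, rcu, mempool, mbuf, ethdev, ...) plus only the pci/vdev buses, the ring mempool driver and the i40e net PMD. For orientation, a minimal sketch of an application consuming exactly that set; pool size, cache size and names are illustrative values, not taken from the test suite:

/* Minimal consumer of the enabled components: EAL + a packet-mbuf pool
 * (backed by the mempool_ring driver) + ethdev probing (bus_pci + net_i40e).
 * Assumes a DPDK install from this build; sizes/names are illustrative. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "demo_pool", 8192, 256, 0,
        RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL) {
        fprintf(stderr, "mbuf pool creation failed\n");
        rte_eal_cleanup();
        return 1;
    }

    /* Ports appear here only if bus_pci probed a device that the
     * net_i40e PMD (the only NIC driver in this build) claims. */
    printf("%u ethdev port(s) probed\n", rte_eth_dev_count_avail());

    rte_eal_cleanup();
    return 0;
}
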
00:03:16.371 User defined options 00:03:16.371 libdir : lib 00:03:16.371 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:16.371 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:16.371 c_link_args : 00:03:16.371 enable_docs : false 00:03:16.371 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:16.371 enable_kmods : false 00:03:16.371 machine : native 00:03:16.371 tests : false 00:03:16.371 00:03:16.371 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:16.371 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:16.371 01:24:46 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:16.630 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:16.630 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:16.630 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:16.630 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:16.630 [4/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:16.630 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:16.630 [6/710] Linking static target lib/librte_kvargs.a 00:03:16.630 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:16.889 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:16.889 [9/710] Linking static target lib/librte_log.a 00:03:16.889 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:16.889 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.148 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:17.148 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:17.148 [14/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.148 [15/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:17.148 [16/710] Linking target lib/librte_log.so.24.0 00:03:17.406 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:17.406 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:17.406 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:17.664 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:17.664 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:17.664 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:17.664 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:17.664 [24/710] Linking target lib/librte_kvargs.so.24.0 00:03:17.923 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:17.923 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:17.923 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:17.923 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:17.923 [29/710] Linking static target lib/librte_telemetry.a 00:03:17.923 [30/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:17.923 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:18.182 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:18.182 [33/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.441 [34/710] Linking target lib/librte_telemetry.so.24.0 00:03:18.441 [35/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:18.441 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:18.441 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:18.441 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:18.441 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:18.441 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:18.441 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:18.441 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:18.441 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:18.441 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:18.699 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:18.958 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:18.958 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:18.958 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:18.958 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:18.958 [50/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:19.216 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:19.216 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:19.216 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:19.216 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:19.474 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:19.474 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:19.474 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:19.474 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:19.474 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:19.474 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:19.474 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:19.733 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:19.733 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:19.733 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:19.733 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:19.733 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:19.991 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:19.991 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:20.249 [69/710] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:20.249 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:20.249 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:20.249 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:20.249 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:20.250 [74/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:20.250 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:20.250 [76/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:20.250 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:20.508 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:20.508 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:20.767 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:20.767 [81/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:20.767 [82/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:21.025 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:21.025 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:21.025 [85/710] Linking static target lib/librte_ring.a 00:03:21.025 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:21.025 [87/710] Linking static target lib/librte_eal.a 00:03:21.284 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:21.284 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.284 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:21.284 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:21.542 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:21.542 [93/710] Linking static target lib/librte_mempool.a 00:03:21.542 [94/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:21.542 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:21.801 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:21.801 [97/710] Linking static target lib/librte_rcu.a 00:03:21.801 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:21.801 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:22.059 [100/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.059 [101/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:22.059 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:22.059 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.060 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:22.060 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:22.318 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:22.318 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:22.318 [108/710] Linking static target lib/librte_mbuf.a 00:03:22.318 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:22.318 [110/710] Linking static target lib/librte_net.a 00:03:22.577 [111/710] Compiling C object 
lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:22.577 [112/710] Linking static target lib/librte_meter.a 00:03:22.577 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.836 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:22.836 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:22.836 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:22.836 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.836 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:22.836 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.403 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:23.662 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:23.921 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:23.921 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:23.921 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:23.921 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:23.921 [126/710] Linking static target lib/librte_pci.a 00:03:23.921 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:24.179 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:24.179 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.179 [130/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:24.179 [131/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:24.179 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:24.179 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:24.438 [134/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:24.438 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:24.438 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:24.438 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:24.438 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:24.438 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:24.438 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:24.696 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:24.696 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:24.696 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:24.954 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:24.954 [145/710] Linking static target lib/librte_cmdline.a 00:03:24.954 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:24.954 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:24.954 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:24.954 [149/710] Linking static target lib/librte_metrics.a 00:03:25.213 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:25.471 [151/710] 
Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.730 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.730 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:25.730 [154/710] Linking static target lib/librte_timer.a 00:03:25.730 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:25.988 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.247 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:26.506 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:26.506 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:26.506 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:27.074 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:27.074 [162/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:27.074 [163/710] Linking static target lib/librte_ethdev.a 00:03:27.332 [164/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:27.332 [165/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:27.332 [166/710] Linking static target lib/librte_bitratestats.a 00:03:27.332 [167/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.332 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:27.332 [169/710] Linking target lib/librte_eal.so.24.0 00:03:27.332 [170/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.332 [171/710] Linking static target lib/librte_bbdev.a 00:03:27.593 [172/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:27.593 [173/710] Linking target lib/librte_ring.so.24.0 00:03:27.903 [174/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:27.903 [175/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:27.903 [176/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:27.903 [177/710] Linking target lib/librte_meter.so.24.0 00:03:27.903 [178/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:27.903 [179/710] Linking target lib/librte_rcu.so.24.0 00:03:27.903 [180/710] Linking target lib/librte_mempool.so.24.0 00:03:27.903 [181/710] Linking target lib/librte_pci.so.24.0 00:03:27.903 [182/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:27.903 [183/710] Linking static target lib/librte_hash.a 00:03:27.903 [184/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:27.903 [185/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:27.903 [186/710] Linking target lib/librte_timer.so.24.0 00:03:27.903 [187/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:28.161 [188/710] Linking static target lib/acl/libavx2_tmp.a 00:03:28.161 [189/710] Linking target lib/librte_mbuf.so.24.0 00:03:28.161 [190/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:28.161 [191/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:28.161 [192/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.161 [193/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:28.161 
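Among the objects just compiled are hash_rte_cuckoo_hash.c.o and the static librte_hash.a. For orientation, a minimal sketch of the cuckoo-hash API that library exposes; the table name, sizes and the rte_jhash choice are illustrative, and EAL is assumed to be initialized before this runs:

/* Sketch of librte_hash's cuckoo hash: create a table, insert one
 * 32-bit key, look it up. Assumes rte_eal_init() has already run. */
#include <stdio.h>
#include <stdint.h>
#include <rte_hash.h>
#include <rte_jhash.h>
#include <rte_lcore.h>

static void hash_demo(void)
{
    struct rte_hash_parameters params = {
        .name = "demo_hash",
        .entries = 1024,
        .key_len = sizeof(uint32_t),
        .hash_func = rte_jhash,
        .hash_func_init_val = 0,
        .socket_id = rte_socket_id(),
    };
    struct rte_hash *h = rte_hash_create(&params);
    if (h == NULL)
        return;

    uint32_t key = 42;
    if (rte_hash_add_key(h, &key) >= 0)
        printf("key found at position %d\n", rte_hash_lookup(h, &key));

    rte_hash_free(h);
}
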
[194/710] Linking static target lib/acl/libavx512_tmp.a 00:03:28.161 [195/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:28.161 [196/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:28.161 [197/710] Linking target lib/librte_net.so.24.0 00:03:28.419 [198/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:28.419 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:28.419 [200/710] Linking target lib/librte_cmdline.so.24.0 00:03:28.419 [201/710] Linking static target lib/librte_acl.a 00:03:28.419 [202/710] Linking target lib/librte_bbdev.so.24.0 00:03:28.419 [203/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.677 [204/710] Linking target lib/librte_hash.so.24.0 00:03:28.677 [205/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:28.677 [206/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.677 [207/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:28.677 [208/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:28.677 [209/710] Linking static target lib/librte_cfgfile.a 00:03:28.677 [210/710] Linking target lib/librte_acl.so.24.0 00:03:28.935 [211/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:28.935 [212/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:28.935 [213/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:29.193 [214/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.193 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:29.193 [216/710] Linking target lib/librte_cfgfile.so.24.0 00:03:29.451 [217/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:29.451 [218/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:29.451 [219/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:29.451 [220/710] Linking static target lib/librte_bpf.a 00:03:29.451 [221/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:29.709 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:29.709 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:29.709 [224/710] Linking static target lib/librte_compressdev.a 00:03:29.709 [225/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.967 [226/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:29.967 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:30.225 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:30.225 [229/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:30.225 [230/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:30.225 [231/710] Linking static target lib/librte_distributor.a 00:03:30.225 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.225 [233/710] Linking target lib/librte_compressdev.so.24.0 00:03:30.483 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson 
to capture output) 00:03:30.483 [235/710] Linking target lib/librte_distributor.so.24.0 00:03:30.483 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:30.483 [237/710] Linking static target lib/librte_dmadev.a 00:03:30.741 [238/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:31.000 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.000 [240/710] Linking target lib/librte_dmadev.so.24.0 00:03:31.000 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:31.000 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:31.258 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:31.258 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:31.516 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:31.516 [246/710] Linking static target lib/librte_efd.a 00:03:31.774 [247/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.774 [248/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:31.774 [249/710] Linking static target lib/librte_cryptodev.a 00:03:31.774 [250/710] Linking target lib/librte_efd.so.24.0 00:03:31.774 [251/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:32.032 [252/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:32.289 [253/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.289 [254/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:32.289 [255/710] Linking static target lib/librte_dispatcher.a 00:03:32.289 [256/710] Linking target lib/librte_ethdev.so.24.0 00:03:32.289 [257/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:32.289 [258/710] Linking static target lib/librte_gpudev.a 00:03:32.289 [259/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:32.547 [260/710] Linking target lib/librte_metrics.so.24.0 00:03:32.547 [261/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:32.547 [262/710] Linking target lib/librte_bpf.so.24.0 00:03:32.547 [263/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:32.547 [264/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:32.547 [265/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:32.547 [266/710] Linking target lib/librte_bitratestats.so.24.0 00:03:32.547 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.547 [268/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:32.805 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:32.806 [270/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.064 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:03:33.064 [272/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:33.064 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:33.064 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.322 
[275/710] Linking target lib/librte_gpudev.so.24.0 00:03:33.322 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:33.322 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:33.322 [278/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:33.322 [279/710] Linking static target lib/librte_eventdev.a 00:03:33.322 [280/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:33.579 [281/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:33.579 [282/710] Linking static target lib/librte_gro.a 00:03:33.579 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:33.579 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:33.579 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:33.838 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.838 [287/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:33.838 [288/710] Linking target lib/librte_gro.so.24.0 00:03:33.838 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:33.838 [290/710] Linking static target lib/librte_gso.a 00:03:34.096 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.096 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:34.096 [293/710] Linking target lib/librte_gso.so.24.0 00:03:34.354 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:34.354 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:34.354 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:34.354 [297/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:34.354 [298/710] Linking static target lib/librte_jobstats.a 00:03:34.612 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:34.612 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:34.612 [301/710] Linking static target lib/librte_ip_frag.a 00:03:34.612 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:34.612 [303/710] Linking static target lib/librte_latencystats.a 00:03:34.612 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.612 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:34.870 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.870 [307/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.870 [308/710] Linking target lib/librte_ip_frag.so.24.0 00:03:34.870 [309/710] Linking target lib/librte_latencystats.so.24.0 00:03:34.870 [310/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:34.870 [311/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:34.870 [312/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:35.128 [313/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:35.128 [314/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:35.128 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:35.128 [316/710] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:03:35.128 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:35.694 [318/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:35.694 [319/710] Linking static target lib/librte_lpm.a 00:03:35.694 [320/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.694 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:35.694 [322/710] Linking target lib/librte_eventdev.so.24.0 00:03:35.694 [323/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:35.694 [324/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:35.694 [325/710] Linking target lib/librte_dispatcher.so.24.0 00:03:35.694 [326/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:35.952 [327/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:35.952 [328/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:35.952 [329/710] Linking static target lib/librte_pcapng.a 00:03:35.952 [330/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.952 [331/710] Linking target lib/librte_lpm.so.24.0 00:03:35.952 [332/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:35.952 [333/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:35.952 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:36.210 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.210 [336/710] Linking target lib/librte_pcapng.so.24.0 00:03:36.210 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:36.468 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:36.468 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:36.468 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:36.468 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:36.726 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:36.726 [343/710] Linking static target lib/librte_power.a 00:03:36.726 [344/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:36.726 [345/710] Linking static target lib/librte_regexdev.a 00:03:36.726 [346/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:36.726 [347/710] Linking static target lib/librte_rawdev.a 00:03:36.726 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:36.726 [349/710] Linking static target lib/librte_member.a 00:03:36.985 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:36.985 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:36.985 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:37.243 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.243 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:37.243 [355/710] Linking static target lib/librte_mldev.a 00:03:37.243 [356/710] Linking target lib/librte_member.so.24.0 00:03:37.243 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:37.243 [358/710] Linking target lib/librte_rawdev.so.24.0 00:03:37.243 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.243 [360/710] Linking target lib/librte_power.so.24.0 00:03:37.243 [361/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:37.501 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:37.501 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.501 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:37.759 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:37.759 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:37.759 [367/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:37.759 [368/710] Linking static target lib/librte_reorder.a 00:03:37.759 [369/710] Linking static target lib/librte_rib.a 00:03:37.759 [370/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:37.759 [371/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:38.018 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:38.018 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:38.018 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:38.018 [375/710] Linking static target lib/librte_stack.a 00:03:38.018 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.276 [377/710] Linking target lib/librte_reorder.so.24.0 00:03:38.276 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:38.276 [379/710] Linking static target lib/librte_security.a 00:03:38.276 [380/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.276 [381/710] Linking target lib/librte_rib.so.24.0 00:03:38.276 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:38.276 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.276 [384/710] Linking target lib/librte_stack.so.24.0 00:03:38.535 [385/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.535 [386/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:38.535 [387/710] Linking target lib/librte_mldev.so.24.0 00:03:38.535 [388/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.535 [389/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:38.535 [390/710] Linking target lib/librte_security.so.24.0 00:03:38.793 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:38.793 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:38.794 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:39.052 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:39.052 [395/710] Linking static target lib/librte_sched.a 00:03:39.311 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:39.311 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.311 [398/710] Linking target lib/librte_sched.so.24.0 00:03:39.570 [399/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 
00:03:39.570 [400/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:39.570 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:39.570 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:39.828 [403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:40.087 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:40.087 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:40.345 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:40.345 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:40.604 [408/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:40.604 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:40.604 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:40.862 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:40.862 [412/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:40.862 [413/710] Linking static target lib/librte_ipsec.a 00:03:41.120 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:41.120 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:41.120 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.120 [417/710] Linking target lib/librte_ipsec.so.24.0 00:03:41.120 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:41.120 [419/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:41.120 [420/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:41.120 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:41.120 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:41.379 [423/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:42.316 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:42.316 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:42.316 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:42.316 [427/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:42.316 [428/710] Linking static target lib/librte_fib.a 00:03:42.316 [429/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:42.316 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:42.316 [431/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:42.316 [432/710] Linking static target lib/librte_pdcp.a 00:03:42.575 [433/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.575 [434/710] Linking target lib/librte_fib.so.24.0 00:03:42.575 [435/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.575 [436/710] Linking target lib/librte_pdcp.so.24.0 00:03:42.834 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:43.093 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:43.093 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:43.352 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:43.352 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:43.352 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 
00:03:43.611 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:43.611 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:43.870 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:43.870 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:43.870 [447/710] Linking static target lib/librte_port.a 00:03:44.129 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:44.129 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:44.129 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:44.129 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:44.404 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.404 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:44.404 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:44.404 [455/710] Linking target lib/librte_port.so.24.0 00:03:44.404 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:44.404 [457/710] Linking static target lib/librte_pdump.a 00:03:44.712 [458/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:44.712 [459/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:44.712 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.712 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:44.712 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:45.278 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:45.278 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:45.279 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:45.279 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:45.279 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:45.537 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:45.796 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:45.796 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:45.796 [471/710] Linking static target lib/librte_table.a 00:03:45.796 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:45.796 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:46.364 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.364 [475/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:46.623 [476/710] Linking target lib/librte_table.so.24.0 00:03:46.623 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:46.623 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:46.882 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:46.882 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:47.141 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:47.141 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:47.400 [483/710] Compiling C object 
lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:47.400 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:47.400 [485/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:47.400 [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:47.968 [487/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:47.968 [488/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:48.227 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:48.227 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:48.227 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:48.227 [492/710] Linking static target lib/librte_graph.a 00:03:48.227 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:48.795 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.795 [495/710] Linking target lib/librte_graph.so.24.0 00:03:48.795 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:48.795 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:48.795 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:48.795 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:49.363 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:49.363 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:49.363 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:49.363 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:49.363 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:49.622 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:49.622 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:49.881 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:49.881 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:50.140 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:50.140 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:50.140 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:50.140 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:50.140 [513/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:50.399 [514/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:50.399 [515/710] Linking static target lib/librte_node.a 00:03:50.658 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.658 [517/710] Linking target lib/librte_node.so.24.0 00:03:50.658 [518/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:50.658 [519/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:50.658 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:50.658 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:50.916 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:50.917 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:50.917 [524/710] Linking static target 
drivers/librte_bus_vdev.a 00:03:50.917 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:50.917 [526/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:50.917 [527/710] Linking static target drivers/librte_bus_pci.a 00:03:51.176 [528/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:51.176 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:51.176 [530/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.176 [531/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:51.176 [532/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:51.176 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:51.435 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:51.435 [535/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:51.435 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:51.435 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:51.694 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.694 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:51.694 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:51.694 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.694 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:51.694 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:51.694 [544/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:51.694 [545/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:51.952 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:52.211 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:52.470 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:52.729 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:52.729 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:52.729 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:53.666 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:53.666 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:53.666 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:53.666 [555/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:53.666 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:53.666 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:54.234 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:54.234 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:54.493 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:54.493 [561/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:54.493 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:55.059 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:55.059 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:55.059 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:55.318 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:55.577 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:55.835 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:55.835 [569/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:55.835 [570/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:55.835 [571/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:55.835 [572/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:56.094 [573/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:56.353 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:56.353 [575/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:56.353 [576/710] Linking static target lib/librte_vhost.a 00:03:56.353 [577/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:56.353 [578/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:56.612 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:56.612 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:56.870 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:56.870 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:57.129 [583/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:57.129 [584/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:57.129 [585/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:57.129 [586/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:57.129 [587/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:57.129 [588/710] Linking static target drivers/librte_net_i40e.a 00:03:57.129 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:57.387 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:57.388 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:57.388 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:57.647 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.647 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:57.647 [595/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:57.906 [596/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.906 [597/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:57.906 [598/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:57.906 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:58.484 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:58.484 [601/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:58.484 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:58.742 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:58.742 [604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:58.742 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:59.001 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:59.001 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:59.260 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:59.519 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:59.519 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:59.519 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:59.519 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:59.519 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:59.777 [614/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:59.777 [615/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:59.777 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:59.777 [617/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:04:00.036 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:04:00.295 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:04:00.295 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:04:00.553 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:04:00.553 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:04:00.811 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:04:01.377 [624/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:04:01.377 [625/710] Linking static target lib/librte_pipeline.a 00:04:01.635 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:04:01.635 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:04:01.635 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:04:01.635 [629/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:04:01.894 [630/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:04:01.894 [631/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:04:01.894 [632/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:04:01.894 [633/710] Linking target app/dpdk-dumpcap 00:04:02.152 [634/710] Linking target app/dpdk-graph 00:04:02.152 [635/710] Linking target app/dpdk-pdump 00:04:02.152 [636/710] Linking target app/dpdk-proc-info 00:04:02.410 [637/710] Linking target app/dpdk-test-acl 00:04:02.410 [638/710] Linking target 
app/dpdk-test-cmdline 00:04:02.410 [639/710] Linking target app/dpdk-test-compress-perf 00:04:02.410 [640/710] Linking target app/dpdk-test-crypto-perf 00:04:02.668 [641/710] Linking target app/dpdk-test-dma-perf 00:04:02.668 [642/710] Linking target app/dpdk-test-fib 00:04:02.668 [643/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:04:02.926 [644/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:04:02.926 [645/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:04:03.183 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:04:03.183 [647/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:04:03.183 [648/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:04:03.183 [649/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:04:03.440 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:04:03.698 [651/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:04:03.698 [652/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:04:03.698 [653/710] Linking target app/dpdk-test-gpudev 00:04:03.698 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:04:03.698 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:04:03.956 [656/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:04:03.956 [657/710] Linking target app/dpdk-test-eventdev 00:04:04.214 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:04:04.214 [659/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.214 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:04:04.214 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:04:04.214 [662/710] Linking target lib/librte_pipeline.so.24.0 00:04:04.214 [663/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:04:04.214 [664/710] Linking target app/dpdk-test-flow-perf 00:04:04.473 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:04:04.473 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:04:04.809 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:04:04.809 [668/710] Linking target app/dpdk-test-bbdev 00:04:04.809 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:04:05.067 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:04:05.067 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:04:05.067 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:04:05.067 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:04:05.325 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:04:05.583 [675/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:04:05.583 [676/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:04:05.583 [677/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:04:05.583 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:04:05.841 [679/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:04:05.841 [680/710] Linking target app/dpdk-test-mldev 00:04:05.841 [681/710] Linking target app/dpdk-test-pipeline 00:04:06.099 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:04:06.358 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:04:06.616 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:04:06.616 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:04:06.616 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:04:06.874 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:04:06.874 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:04:07.132 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:04:07.132 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:04:07.390 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:04:07.390 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:04:07.390 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:04:07.956 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:04:07.956 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:04:08.215 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:04:08.473 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:04:08.473 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:04:08.731 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:04:08.731 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:04:08.731 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:04:08.731 [702/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:04:08.731 [703/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:04:08.990 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:04:08.990 [705/710] Linking target app/dpdk-test-regex 00:04:09.248 [706/710] Linking target app/dpdk-test-sad 00:04:09.248 [707/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:04:09.507 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:04:09.766 [709/710] Linking target app/dpdk-testpmd 00:04:10.024 [710/710] Linking target app/dpdk-test-security-perf 00:04:10.024 01:25:40 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:04:10.024 01:25:40 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:04:10.024 01:25:40 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:04:10.024 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:04:10.024 [0/1] Installing files. 
00:04:10.594 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.594 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.595 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:04:10.596 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.596 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.597 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.598 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.598 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:10.599 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:04:10.599 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:04:10.599 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
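The librte_*.a and librte_*.so.24.0 files going into /home/vagrant/spdk_repo/dpdk/build/lib above are the static and shared DPDK libraries that SPDK (or any other DPDK application) can link against. As a minimal, hedged sketch of what a consumer of these libraries looks like (the file name, the lcore callback and the build flags are illustrative assumptions, not taken from this log; any DPDK 23.11 install visible to the compiler would do):

/* minimal_eal.c: tiny DPDK consumer, linked against librte_eal (and its
 * dependencies) from an install such as the one being produced above. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

/* Work item executed on each worker lcore. */
static int
lcore_hello(void *arg)
{
        (void)arg;
        printf("hello from lcore %u\n", rte_lcore_id());
        return 0;
}

int
main(int argc, char **argv)
{
        /* Parse EAL arguments (cores, memory, devices) and bring up the runtime. */
        if (rte_eal_init(argc, argv) < 0)
                return 1;

        /* Launch the callback on all worker lcores, run it on the main lcore too,
         * then wait for the workers to finish. */
        rte_eal_mp_remote_launch(lcore_hello, NULL, SKIP_MAIN);
        lcore_hello(NULL);
        rte_eal_mp_wait_lcore();

        rte_eal_cleanup();
        return 0;
}

A typical way to obtain the compile and link flags for such a program is pkg-config against the libdpdk.pc that DPDK's meson build normally installs alongside these libraries, although the exact invocation depends on how PKG_CONFIG_PATH is set up on the build host.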
00:04:10.599 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.599 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:10.859 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:10.859 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:10.859 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:04:10.859 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:04:10.859 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.859 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:10.860 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.121 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.122 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.123 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:04:11.124 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:04:11.124 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:11.124 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:04:11.124 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:04:11.124 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:04:11.124 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:04:11.124 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:04:11.124 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:04:11.124 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:04:11.124 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:04:11.124 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:04:11.124 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:04:11.124 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:04:11.124 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:04:11.124 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:04:11.124 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:04:11.124 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:04:11.124 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:04:11.124 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:04:11.124 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:04:11.124 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:04:11.124 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:04:11.124 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:04:11.124 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:04:11.124 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:04:11.124 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:04:11.124 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:04:11.124 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:04:11.124 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:04:11.124 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:04:11.124 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:04:11.124 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:04:11.124 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:04:11.124 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:04:11.124 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:04:11.124 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:04:11.124 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:04:11.124 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:04:11.124 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:04:11.124 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:04:11.124 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:04:11.124 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:04:11.124 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:04:11.124 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:04:11.124 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:04:11.124 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:04:11.124 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:04:11.124 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:04:11.124 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:04:11.124 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:04:11.124 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:04:11.124 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:04:11.124 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:04:11.124 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:04:11.124 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:04:11.124 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:04:11.124 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:04:11.124 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:04:11.124 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:04:11.124 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:04:11.124 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:04:11.124 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:04:11.124 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:04:11.124 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:04:11.124 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:04:11.124 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:04:11.124 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:04:11.124 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:04:11.124 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:04:11.124 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:04:11.124 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:04:11.124 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:04:11.124 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:04:11.124 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:04:11.124 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:04:11.124 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:04:11.124 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:04:11.124 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:04:11.124 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:04:11.124 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:04:11.124 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:04:11.124 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:04:11.124 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:04:11.124 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:04:11.124 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:04:11.124 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:04:11.124 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:04:11.124 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:04:11.124 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:04:11.124 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:04:11.124 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:04:11.124 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:04:11.124 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:04:11.124 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:04:11.124 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:04:11.124 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:04:11.124 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:04:11.124 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:04:11.124 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:04:11.124 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:04:11.124 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:04:11.124 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:04:11.124 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:04:11.124 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:04:11.124 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:04:11.124 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:04:11.124 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:04:11.124 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:04:11.124 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:04:11.124 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:04:11.124 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:04:11.124 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:04:11.124 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:04:11.125 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:04:11.125 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:04:11.125 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:04:11.125 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:04:11.125 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:04:11.125 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:04:11.125 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:04:11.125 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:04:11.125 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:04:11.125 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:04:11.125 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:04:11.125 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:04:11.125 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:04:11.125 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:04:11.125 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:04:11.125 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:11.125 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:04:11.125 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:11.125 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:04:11.125 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:11.125 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:04:11.125 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:11.125 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:04:11.125 01:25:41 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:04:11.125 01:25:41 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:04:11.125 00:04:11.125 real 1m2.164s 00:04:11.125 user 7m35.744s 00:04:11.125 sys 1m4.848s 00:04:11.125 01:25:41 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:11.125 01:25:41 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:04:11.125 ************************************ 00:04:11.125 END TEST build_native_dpdk 00:04:11.125 ************************************ 00:04:11.125 01:25:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:11.125 01:25:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:11.125 01:25:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:11.125 01:25:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:11.125 01:25:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:11.125 01:25:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:11.125 01:25:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:11.125 01:25:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:04:11.384 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:04:11.384 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:04:11.384 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:04:11.384 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:11.951 Using 'verbs' RDMA provider 00:04:25.192 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:40.081 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:40.081 Creating mk/config.mk...done. 00:04:40.081 Creating mk/cc.flags.mk...done. 00:04:40.081 Type 'make' to build. 
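For context only (not part of the recorded autotest run): the configure step above points SPDK at the DPDK tree that was just installed, and the log line "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..." shows it being resolved through the libdpdk.pc file installed earlier. A minimal sketch, assuming the same paths as in this log, of how that installation can be inspected by hand before re-running configure:

# Hypothetical check, not taken from the CI script: make the freshly installed
# DPDK visible to pkg-config via the pkgconfig directory created above.
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
pkg-config --exists libdpdk && pkg-config --modversion libdpdk   # confirm libdpdk.pc is found and report its version
pkg-config --cflags --libs libdpdk                               # compile/link flags an application (or SPDK's configure) would consume
# SPDK's configure is then pointed at the same prefix, as recorded above:
#   ./configure ... --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared

This is only an illustrative sketch of the relationship between the install prefix, the pkgconfig metadata, and the --with-dpdk option; the authoritative invocation is the configure command recorded in the log itself.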
00:04:40.081 01:26:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:40.081 01:26:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:40.081 01:26:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:40.081 01:26:09 -- common/autotest_common.sh@10 -- $ set +x 00:04:40.081 ************************************ 00:04:40.081 START TEST make 00:04:40.081 ************************************ 00:04:40.081 01:26:09 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:40.081 The Meson build system 00:04:40.081 Version: 1.5.0 00:04:40.081 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:04:40.081 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:40.081 Build type: native build 00:04:40.081 Project name: libvfio-user 00:04:40.081 Project version: 0.0.1 00:04:40.081 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:40.081 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:40.081 Host machine cpu family: x86_64 00:04:40.081 Host machine cpu: x86_64 00:04:40.081 Run-time dependency threads found: YES 00:04:40.081 Library dl found: YES 00:04:40.081 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:40.081 Run-time dependency json-c found: YES 0.17 00:04:40.081 Run-time dependency cmocka found: YES 1.1.7 00:04:40.081 Program pytest-3 found: NO 00:04:40.081 Program flake8 found: NO 00:04:40.081 Program misspell-fixer found: NO 00:04:40.081 Program restructuredtext-lint found: NO 00:04:40.081 Program valgrind found: YES (/usr/bin/valgrind) 00:04:40.081 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:40.081 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:40.081 Compiler for C supports arguments -Wwrite-strings: YES 00:04:40.081 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:40.081 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:04:40.081 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:04:40.081 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:40.081 Build targets in project: 8 00:04:40.081 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:40.081 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:40.081 00:04:40.081 libvfio-user 0.0.1 00:04:40.081 00:04:40.081 User defined options 00:04:40.081 buildtype : debug 00:04:40.081 default_library: shared 00:04:40.081 libdir : /usr/local/lib 00:04:40.081 00:04:40.081 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:40.649 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:40.908 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:40.908 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:40.908 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:40.908 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:40.908 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:40.908 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:40.908 [7/37] Compiling C object samples/client.p/client.c.o 00:04:40.908 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:40.908 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:40.908 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:40.908 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:40.908 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:40.908 [13/37] Compiling C object samples/null.p/null.c.o 00:04:40.908 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:40.908 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:40.908 [16/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:41.166 [17/37] Linking target samples/client 00:04:41.166 [18/37] Compiling C object samples/server.p/server.c.o 00:04:41.166 [19/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:41.166 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:41.166 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:41.166 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:41.166 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:41.166 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:41.166 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:41.166 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:41.166 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:41.166 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:41.166 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:41.424 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:41.424 [31/37] Linking target test/unit_tests 00:04:41.424 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:41.425 [33/37] Linking target samples/null 00:04:41.425 [34/37] Linking target samples/server 00:04:41.425 [35/37] Linking target samples/shadow_ioeventfd_server 00:04:41.425 [36/37] Linking target samples/lspci 00:04:41.425 [37/37] Linking target samples/gpio-pci-idio-16 00:04:41.425 INFO: autodetecting backend as ninja 00:04:41.425 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:41.425 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:41.992 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:41.992 ninja: no work to do. 00:05:38.210 CC lib/ut/ut.o 00:05:38.210 CC lib/log/log_flags.o 00:05:38.210 CC lib/log/log.o 00:05:38.210 CC lib/log/log_deprecated.o 00:05:38.210 CC lib/ut_mock/mock.o 00:05:38.210 LIB libspdk_ut.a 00:05:38.210 LIB libspdk_log.a 00:05:38.210 LIB libspdk_ut_mock.a 00:05:38.210 SO libspdk_ut.so.2.0 00:05:38.210 SO libspdk_ut_mock.so.6.0 00:05:38.210 SO libspdk_log.so.7.1 00:05:38.210 SYMLINK libspdk_log.so 00:05:38.210 SYMLINK libspdk_ut.so 00:05:38.210 SYMLINK libspdk_ut_mock.so 00:05:38.210 CC lib/util/base64.o 00:05:38.210 CC lib/util/bit_array.o 00:05:38.210 CC lib/util/crc16.o 00:05:38.210 CC lib/util/cpuset.o 00:05:38.210 CC lib/util/crc32.o 00:05:38.210 CC lib/util/crc32c.o 00:05:38.210 CXX lib/trace_parser/trace.o 00:05:38.211 CC lib/ioat/ioat.o 00:05:38.211 CC lib/dma/dma.o 00:05:38.211 CC lib/util/crc32_ieee.o 00:05:38.211 CC lib/vfio_user/host/vfio_user_pci.o 00:05:38.211 CC lib/util/crc64.o 00:05:38.211 CC lib/util/dif.o 00:05:38.211 CC lib/util/fd.o 00:05:38.211 CC lib/util/fd_group.o 00:05:38.211 CC lib/vfio_user/host/vfio_user.o 00:05:38.211 CC lib/util/file.o 00:05:38.211 LIB libspdk_dma.a 00:05:38.211 CC lib/util/hexlify.o 00:05:38.211 LIB libspdk_ioat.a 00:05:38.211 SO libspdk_dma.so.5.0 00:05:38.211 SO libspdk_ioat.so.7.0 00:05:38.211 CC lib/util/iov.o 00:05:38.211 SYMLINK libspdk_dma.so 00:05:38.211 SYMLINK libspdk_ioat.so 00:05:38.211 CC lib/util/math.o 00:05:38.211 CC lib/util/net.o 00:05:38.211 CC lib/util/pipe.o 00:05:38.211 CC lib/util/strerror_tls.o 00:05:38.211 LIB libspdk_vfio_user.a 00:05:38.211 CC lib/util/string.o 00:05:38.211 CC lib/util/uuid.o 00:05:38.211 SO libspdk_vfio_user.so.5.0 00:05:38.211 CC lib/util/xor.o 00:05:38.211 CC lib/util/zipf.o 00:05:38.211 CC lib/util/md5.o 00:05:38.211 SYMLINK libspdk_vfio_user.so 00:05:38.211 LIB libspdk_util.a 00:05:38.211 SO libspdk_util.so.10.1 00:05:38.211 LIB libspdk_trace_parser.a 00:05:38.211 SYMLINK libspdk_util.so 00:05:38.211 SO libspdk_trace_parser.so.6.0 00:05:38.211 SYMLINK libspdk_trace_parser.so 00:05:38.211 CC lib/rdma_utils/rdma_utils.o 00:05:38.211 CC lib/vmd/led.o 00:05:38.211 CC lib/vmd/vmd.o 00:05:38.211 CC lib/idxd/idxd.o 00:05:38.211 CC lib/idxd/idxd_user.o 00:05:38.211 CC lib/idxd/idxd_kernel.o 00:05:38.211 CC lib/conf/conf.o 00:05:38.211 CC lib/env_dpdk/memory.o 00:05:38.211 CC lib/json/json_parse.o 00:05:38.211 CC lib/env_dpdk/env.o 00:05:38.211 CC lib/env_dpdk/pci.o 00:05:38.211 CC lib/env_dpdk/init.o 00:05:38.211 LIB libspdk_conf.a 00:05:38.211 CC lib/json/json_util.o 00:05:38.211 SO libspdk_conf.so.6.0 00:05:38.211 LIB libspdk_rdma_utils.a 00:05:38.211 CC lib/json/json_write.o 00:05:38.211 SYMLINK libspdk_conf.so 00:05:38.211 SO libspdk_rdma_utils.so.1.0 00:05:38.211 CC lib/env_dpdk/threads.o 00:05:38.211 SYMLINK libspdk_rdma_utils.so 00:05:38.211 CC lib/env_dpdk/pci_ioat.o 00:05:38.211 CC lib/env_dpdk/pci_virtio.o 00:05:38.211 CC lib/env_dpdk/pci_vmd.o 00:05:38.211 CC lib/env_dpdk/pci_idxd.o 00:05:38.211 CC lib/env_dpdk/pci_event.o 00:05:38.211 LIB libspdk_idxd.a 00:05:38.211 CC lib/env_dpdk/sigbus_handler.o 00:05:38.211 LIB libspdk_json.a 00:05:38.211 SO libspdk_idxd.so.12.1 00:05:38.211 CC lib/env_dpdk/pci_dpdk.o 00:05:38.211 SO libspdk_json.so.6.0 00:05:38.211 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:38.211 LIB 
libspdk_vmd.a 00:05:38.211 SYMLINK libspdk_idxd.so 00:05:38.211 SYMLINK libspdk_json.so 00:05:38.211 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:38.211 SO libspdk_vmd.so.6.0 00:05:38.211 SYMLINK libspdk_vmd.so 00:05:38.211 CC lib/rdma_provider/common.o 00:05:38.211 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:38.211 CC lib/jsonrpc/jsonrpc_server.o 00:05:38.211 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:38.211 CC lib/jsonrpc/jsonrpc_client.o 00:05:38.211 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:38.211 LIB libspdk_rdma_provider.a 00:05:38.211 SO libspdk_rdma_provider.so.7.0 00:05:38.211 SYMLINK libspdk_rdma_provider.so 00:05:38.211 LIB libspdk_jsonrpc.a 00:05:38.211 SO libspdk_jsonrpc.so.6.0 00:05:38.211 SYMLINK libspdk_jsonrpc.so 00:05:38.211 LIB libspdk_env_dpdk.a 00:05:38.211 SO libspdk_env_dpdk.so.15.1 00:05:38.211 CC lib/rpc/rpc.o 00:05:38.211 SYMLINK libspdk_env_dpdk.so 00:05:38.211 LIB libspdk_rpc.a 00:05:38.211 SO libspdk_rpc.so.6.0 00:05:38.211 SYMLINK libspdk_rpc.so 00:05:38.211 CC lib/trace/trace_flags.o 00:05:38.211 CC lib/trace/trace.o 00:05:38.211 CC lib/trace/trace_rpc.o 00:05:38.211 CC lib/notify/notify.o 00:05:38.211 CC lib/notify/notify_rpc.o 00:05:38.211 CC lib/keyring/keyring_rpc.o 00:05:38.211 CC lib/keyring/keyring.o 00:05:38.211 LIB libspdk_notify.a 00:05:38.211 SO libspdk_notify.so.6.0 00:05:38.211 SYMLINK libspdk_notify.so 00:05:38.211 LIB libspdk_trace.a 00:05:38.211 LIB libspdk_keyring.a 00:05:38.211 SO libspdk_keyring.so.2.0 00:05:38.211 SO libspdk_trace.so.11.0 00:05:38.211 SYMLINK libspdk_keyring.so 00:05:38.211 SYMLINK libspdk_trace.so 00:05:38.211 CC lib/thread/thread.o 00:05:38.211 CC lib/thread/iobuf.o 00:05:38.211 CC lib/sock/sock_rpc.o 00:05:38.211 CC lib/sock/sock.o 00:05:38.211 LIB libspdk_sock.a 00:05:38.211 SO libspdk_sock.so.10.0 00:05:38.211 SYMLINK libspdk_sock.so 00:05:38.211 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:38.211 CC lib/nvme/nvme_ctrlr.o 00:05:38.211 CC lib/nvme/nvme_fabric.o 00:05:38.211 CC lib/nvme/nvme_ns_cmd.o 00:05:38.211 CC lib/nvme/nvme_ns.o 00:05:38.211 CC lib/nvme/nvme_pcie.o 00:05:38.211 CC lib/nvme/nvme_qpair.o 00:05:38.211 CC lib/nvme/nvme_pcie_common.o 00:05:38.211 CC lib/nvme/nvme.o 00:05:38.211 LIB libspdk_thread.a 00:05:38.211 SO libspdk_thread.so.11.0 00:05:38.211 CC lib/nvme/nvme_quirks.o 00:05:38.211 SYMLINK libspdk_thread.so 00:05:38.211 CC lib/nvme/nvme_transport.o 00:05:38.211 CC lib/nvme/nvme_discovery.o 00:05:38.211 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:38.211 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:38.211 CC lib/nvme/nvme_tcp.o 00:05:38.211 CC lib/accel/accel.o 00:05:38.211 CC lib/accel/accel_rpc.o 00:05:38.211 CC lib/nvme/nvme_opal.o 00:05:38.211 CC lib/accel/accel_sw.o 00:05:38.211 CC lib/nvme/nvme_io_msg.o 00:05:38.211 CC lib/nvme/nvme_poll_group.o 00:05:38.211 CC lib/blob/blobstore.o 00:05:38.211 CC lib/virtio/virtio.o 00:05:38.211 CC lib/init/json_config.o 00:05:38.211 CC lib/vfu_tgt/tgt_endpoint.o 00:05:38.211 CC lib/fsdev/fsdev.o 00:05:38.211 CC lib/init/subsystem.o 00:05:38.211 CC lib/virtio/virtio_vhost_user.o 00:05:38.211 CC lib/init/subsystem_rpc.o 00:05:38.211 CC lib/vfu_tgt/tgt_rpc.o 00:05:38.211 LIB libspdk_accel.a 00:05:38.211 SO libspdk_accel.so.16.0 00:05:38.211 CC lib/init/rpc.o 00:05:38.211 CC lib/blob/request.o 00:05:38.211 CC lib/blob/zeroes.o 00:05:38.211 LIB libspdk_vfu_tgt.a 00:05:38.211 CC lib/virtio/virtio_vfio_user.o 00:05:38.211 SYMLINK libspdk_accel.so 00:05:38.211 CC lib/virtio/virtio_pci.o 00:05:38.211 SO libspdk_vfu_tgt.so.3.0 00:05:38.211 CC lib/blob/blob_bs_dev.o 00:05:38.211 SYMLINK 
libspdk_vfu_tgt.so 00:05:38.211 CC lib/fsdev/fsdev_io.o 00:05:38.211 LIB libspdk_init.a 00:05:38.211 CC lib/fsdev/fsdev_rpc.o 00:05:38.211 CC lib/nvme/nvme_zns.o 00:05:38.470 SO libspdk_init.so.6.0 00:05:38.470 CC lib/nvme/nvme_stubs.o 00:05:38.470 SYMLINK libspdk_init.so 00:05:38.470 CC lib/nvme/nvme_auth.o 00:05:38.470 CC lib/nvme/nvme_cuse.o 00:05:38.470 CC lib/bdev/bdev.o 00:05:38.470 CC lib/bdev/bdev_rpc.o 00:05:38.470 LIB libspdk_virtio.a 00:05:38.470 CC lib/bdev/bdev_zone.o 00:05:38.470 SO libspdk_virtio.so.7.0 00:05:38.470 SYMLINK libspdk_virtio.so 00:05:38.728 CC lib/bdev/part.o 00:05:38.728 LIB libspdk_fsdev.a 00:05:38.728 SO libspdk_fsdev.so.2.0 00:05:38.728 CC lib/bdev/scsi_nvme.o 00:05:38.728 SYMLINK libspdk_fsdev.so 00:05:38.728 CC lib/event/app.o 00:05:38.728 CC lib/event/reactor.o 00:05:38.987 CC lib/event/log_rpc.o 00:05:38.987 CC lib/event/app_rpc.o 00:05:38.987 CC lib/event/scheduler_static.o 00:05:39.245 CC lib/nvme/nvme_vfio_user.o 00:05:39.245 CC lib/nvme/nvme_rdma.o 00:05:39.245 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:39.245 LIB libspdk_event.a 00:05:39.245 SO libspdk_event.so.14.0 00:05:39.503 SYMLINK libspdk_event.so 00:05:39.762 LIB libspdk_fuse_dispatcher.a 00:05:39.762 SO libspdk_fuse_dispatcher.so.1.0 00:05:40.021 SYMLINK libspdk_fuse_dispatcher.so 00:05:40.589 LIB libspdk_blob.a 00:05:40.589 SO libspdk_blob.so.12.0 00:05:40.589 LIB libspdk_nvme.a 00:05:40.589 SYMLINK libspdk_blob.so 00:05:40.848 SO libspdk_nvme.so.15.0 00:05:40.848 CC lib/blobfs/tree.o 00:05:40.848 CC lib/blobfs/blobfs.o 00:05:40.848 CC lib/lvol/lvol.o 00:05:41.108 SYMLINK libspdk_nvme.so 00:05:41.108 LIB libspdk_bdev.a 00:05:41.108 SO libspdk_bdev.so.17.0 00:05:41.366 SYMLINK libspdk_bdev.so 00:05:41.625 CC lib/nbd/nbd.o 00:05:41.625 CC lib/nbd/nbd_rpc.o 00:05:41.625 CC lib/ftl/ftl_core.o 00:05:41.625 CC lib/ftl/ftl_init.o 00:05:41.625 CC lib/ublk/ublk.o 00:05:41.625 CC lib/ftl/ftl_layout.o 00:05:41.625 CC lib/scsi/dev.o 00:05:41.625 CC lib/nvmf/ctrlr.o 00:05:41.625 CC lib/nvmf/ctrlr_discovery.o 00:05:41.625 CC lib/scsi/lun.o 00:05:41.884 LIB libspdk_blobfs.a 00:05:41.884 CC lib/nvmf/ctrlr_bdev.o 00:05:41.884 SO libspdk_blobfs.so.11.0 00:05:41.884 CC lib/scsi/port.o 00:05:41.884 SYMLINK libspdk_blobfs.so 00:05:41.884 CC lib/scsi/scsi.o 00:05:41.884 LIB libspdk_lvol.a 00:05:41.884 SO libspdk_lvol.so.11.0 00:05:41.884 LIB libspdk_nbd.a 00:05:41.884 SO libspdk_nbd.so.7.0 00:05:41.884 CC lib/ftl/ftl_debug.o 00:05:41.884 SYMLINK libspdk_lvol.so 00:05:41.884 CC lib/scsi/scsi_bdev.o 00:05:42.142 CC lib/nvmf/subsystem.o 00:05:42.143 SYMLINK libspdk_nbd.so 00:05:42.143 CC lib/nvmf/nvmf.o 00:05:42.143 CC lib/nvmf/nvmf_rpc.o 00:05:42.143 CC lib/nvmf/transport.o 00:05:42.143 CC lib/ublk/ublk_rpc.o 00:05:42.143 CC lib/ftl/ftl_io.o 00:05:42.143 CC lib/nvmf/tcp.o 00:05:42.402 LIB libspdk_ublk.a 00:05:42.402 SO libspdk_ublk.so.3.0 00:05:42.402 SYMLINK libspdk_ublk.so 00:05:42.402 CC lib/nvmf/stubs.o 00:05:42.402 CC lib/nvmf/mdns_server.o 00:05:42.402 CC lib/scsi/scsi_pr.o 00:05:42.402 CC lib/ftl/ftl_sb.o 00:05:42.660 CC lib/ftl/ftl_l2p.o 00:05:42.919 CC lib/nvmf/vfio_user.o 00:05:42.919 CC lib/scsi/scsi_rpc.o 00:05:42.919 CC lib/nvmf/rdma.o 00:05:42.919 CC lib/nvmf/auth.o 00:05:42.919 CC lib/ftl/ftl_l2p_flat.o 00:05:42.919 CC lib/scsi/task.o 00:05:42.919 CC lib/ftl/ftl_nv_cache.o 00:05:43.177 CC lib/ftl/ftl_band.o 00:05:43.177 LIB libspdk_scsi.a 00:05:43.177 CC lib/ftl/ftl_band_ops.o 00:05:43.177 CC lib/ftl/ftl_writer.o 00:05:43.177 SO libspdk_scsi.so.9.0 00:05:43.436 SYMLINK libspdk_scsi.so 
00:05:43.436 CC lib/ftl/ftl_rq.o 00:05:43.436 CC lib/ftl/ftl_reloc.o 00:05:43.436 CC lib/ftl/ftl_l2p_cache.o 00:05:43.436 CC lib/ftl/ftl_p2l.o 00:05:43.694 CC lib/ftl/ftl_p2l_log.o 00:05:43.694 CC lib/ftl/mngt/ftl_mngt.o 00:05:43.694 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:43.953 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:43.953 CC lib/iscsi/conn.o 00:05:43.953 CC lib/iscsi/init_grp.o 00:05:43.953 CC lib/iscsi/iscsi.o 00:05:43.953 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:43.953 CC lib/vhost/vhost.o 00:05:43.953 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:44.212 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:44.212 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:44.212 CC lib/iscsi/param.o 00:05:44.212 CC lib/vhost/vhost_rpc.o 00:05:44.471 CC lib/vhost/vhost_scsi.o 00:05:44.471 CC lib/iscsi/portal_grp.o 00:05:44.471 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:44.471 CC lib/vhost/vhost_blk.o 00:05:44.471 CC lib/iscsi/tgt_node.o 00:05:44.471 CC lib/vhost/rte_vhost_user.o 00:05:44.471 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:44.729 CC lib/iscsi/iscsi_subsystem.o 00:05:44.998 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:44.998 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:44.998 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:44.998 LIB libspdk_nvmf.a 00:05:44.998 CC lib/iscsi/iscsi_rpc.o 00:05:44.998 CC lib/iscsi/task.o 00:05:45.309 SO libspdk_nvmf.so.20.0 00:05:45.309 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:45.309 CC lib/ftl/utils/ftl_conf.o 00:05:45.309 CC lib/ftl/utils/ftl_md.o 00:05:45.309 CC lib/ftl/utils/ftl_mempool.o 00:05:45.309 SYMLINK libspdk_nvmf.so 00:05:45.309 CC lib/ftl/utils/ftl_bitmap.o 00:05:45.309 CC lib/ftl/utils/ftl_property.o 00:05:45.309 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:45.309 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:45.309 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:45.309 LIB libspdk_iscsi.a 00:05:45.581 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:45.581 SO libspdk_iscsi.so.8.0 00:05:45.581 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:45.581 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:45.581 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:45.582 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:45.582 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:45.582 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:45.582 SYMLINK libspdk_iscsi.so 00:05:45.582 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:45.582 LIB libspdk_vhost.a 00:05:45.582 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:45.841 SO libspdk_vhost.so.8.0 00:05:45.841 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:45.841 CC lib/ftl/base/ftl_base_dev.o 00:05:45.841 CC lib/ftl/base/ftl_base_bdev.o 00:05:45.841 CC lib/ftl/ftl_trace.o 00:05:45.841 SYMLINK libspdk_vhost.so 00:05:46.099 LIB libspdk_ftl.a 00:05:46.358 SO libspdk_ftl.so.9.0 00:05:46.618 SYMLINK libspdk_ftl.so 00:05:46.879 CC module/vfu_device/vfu_virtio.o 00:05:46.879 CC module/env_dpdk/env_dpdk_rpc.o 00:05:46.879 CC module/keyring/linux/keyring.o 00:05:46.879 CC module/accel/error/accel_error.o 00:05:46.879 CC module/sock/posix/posix.o 00:05:46.879 CC module/accel/ioat/accel_ioat.o 00:05:46.879 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:46.879 CC module/fsdev/aio/fsdev_aio.o 00:05:46.879 CC module/keyring/file/keyring.o 00:05:46.879 CC module/blob/bdev/blob_bdev.o 00:05:47.137 LIB libspdk_env_dpdk_rpc.a 00:05:47.137 SO libspdk_env_dpdk_rpc.so.6.0 00:05:47.137 SYMLINK libspdk_env_dpdk_rpc.so 00:05:47.137 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:47.137 CC module/keyring/linux/keyring_rpc.o 00:05:47.137 CC module/keyring/file/keyring_rpc.o 00:05:47.137 CC module/accel/ioat/accel_ioat_rpc.o 00:05:47.137 CC 
module/accel/error/accel_error_rpc.o 00:05:47.137 LIB libspdk_scheduler_dynamic.a 00:05:47.137 SO libspdk_scheduler_dynamic.so.4.0 00:05:47.396 LIB libspdk_blob_bdev.a 00:05:47.396 SYMLINK libspdk_scheduler_dynamic.so 00:05:47.396 CC module/fsdev/aio/linux_aio_mgr.o 00:05:47.396 LIB libspdk_keyring_file.a 00:05:47.396 SO libspdk_blob_bdev.so.12.0 00:05:47.396 LIB libspdk_keyring_linux.a 00:05:47.396 SO libspdk_keyring_file.so.2.0 00:05:47.396 LIB libspdk_accel_ioat.a 00:05:47.396 LIB libspdk_accel_error.a 00:05:47.396 SO libspdk_keyring_linux.so.1.0 00:05:47.396 SO libspdk_accel_ioat.so.6.0 00:05:47.396 SYMLINK libspdk_blob_bdev.so 00:05:47.396 CC module/vfu_device/vfu_virtio_blk.o 00:05:47.396 SO libspdk_accel_error.so.2.0 00:05:47.396 SYMLINK libspdk_keyring_file.so 00:05:47.396 SYMLINK libspdk_keyring_linux.so 00:05:47.396 SYMLINK libspdk_accel_ioat.so 00:05:47.396 SYMLINK libspdk_accel_error.so 00:05:47.655 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:47.655 CC module/vfu_device/vfu_virtio_scsi.o 00:05:47.655 CC module/accel/dsa/accel_dsa.o 00:05:47.655 CC module/accel/iaa/accel_iaa.o 00:05:47.655 CC module/sock/uring/uring.o 00:05:47.655 CC module/scheduler/gscheduler/gscheduler.o 00:05:47.655 LIB libspdk_fsdev_aio.a 00:05:47.655 CC module/accel/iaa/accel_iaa_rpc.o 00:05:47.655 LIB libspdk_scheduler_dpdk_governor.a 00:05:47.655 SO libspdk_fsdev_aio.so.1.0 00:05:47.655 LIB libspdk_sock_posix.a 00:05:47.655 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:47.655 SO libspdk_sock_posix.so.6.0 00:05:47.913 SYMLINK libspdk_fsdev_aio.so 00:05:47.913 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:47.913 CC module/bdev/delay/vbdev_delay.o 00:05:47.913 LIB libspdk_scheduler_gscheduler.a 00:05:47.913 SYMLINK libspdk_sock_posix.so 00:05:47.913 SO libspdk_scheduler_gscheduler.so.4.0 00:05:47.913 LIB libspdk_accel_iaa.a 00:05:47.913 SO libspdk_accel_iaa.so.3.0 00:05:47.913 CC module/vfu_device/vfu_virtio_rpc.o 00:05:47.913 SYMLINK libspdk_scheduler_gscheduler.so 00:05:47.913 CC module/vfu_device/vfu_virtio_fs.o 00:05:47.913 CC module/accel/dsa/accel_dsa_rpc.o 00:05:47.913 SYMLINK libspdk_accel_iaa.so 00:05:47.913 CC module/bdev/error/vbdev_error.o 00:05:47.913 CC module/bdev/gpt/gpt.o 00:05:47.913 CC module/bdev/lvol/vbdev_lvol.o 00:05:47.913 CC module/bdev/malloc/bdev_malloc.o 00:05:48.172 LIB libspdk_accel_dsa.a 00:05:48.172 SO libspdk_accel_dsa.so.5.0 00:05:48.172 CC module/bdev/null/bdev_null.o 00:05:48.172 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:48.172 SYMLINK libspdk_accel_dsa.so 00:05:48.172 LIB libspdk_vfu_device.a 00:05:48.172 CC module/bdev/gpt/vbdev_gpt.o 00:05:48.172 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:48.172 SO libspdk_vfu_device.so.3.0 00:05:48.172 CC module/bdev/error/vbdev_error_rpc.o 00:05:48.431 SYMLINK libspdk_vfu_device.so 00:05:48.431 LIB libspdk_sock_uring.a 00:05:48.431 SO libspdk_sock_uring.so.5.0 00:05:48.431 LIB libspdk_bdev_malloc.a 00:05:48.431 LIB libspdk_bdev_delay.a 00:05:48.431 CC module/bdev/null/bdev_null_rpc.o 00:05:48.431 SO libspdk_bdev_malloc.so.6.0 00:05:48.431 SO libspdk_bdev_delay.so.6.0 00:05:48.431 LIB libspdk_bdev_error.a 00:05:48.431 SYMLINK libspdk_sock_uring.so 00:05:48.431 SO libspdk_bdev_error.so.6.0 00:05:48.431 LIB libspdk_bdev_gpt.a 00:05:48.431 SYMLINK libspdk_bdev_delay.so 00:05:48.431 CC module/blobfs/bdev/blobfs_bdev.o 00:05:48.431 SYMLINK libspdk_bdev_malloc.so 00:05:48.431 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:48.432 SO libspdk_bdev_gpt.so.6.0 00:05:48.690 SYMLINK libspdk_bdev_error.so 00:05:48.690 CC 
module/bdev/nvme/bdev_nvme.o 00:05:48.690 CC module/bdev/passthru/vbdev_passthru.o 00:05:48.690 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:48.690 SYMLINK libspdk_bdev_gpt.so 00:05:48.690 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:48.690 LIB libspdk_bdev_null.a 00:05:48.690 SO libspdk_bdev_null.so.6.0 00:05:48.690 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:48.690 CC module/bdev/split/vbdev_split.o 00:05:48.690 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:48.690 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:48.690 SYMLINK libspdk_bdev_null.so 00:05:48.690 CC module/bdev/raid/bdev_raid.o 00:05:48.690 CC module/bdev/raid/bdev_raid_rpc.o 00:05:48.950 LIB libspdk_bdev_passthru.a 00:05:48.950 LIB libspdk_blobfs_bdev.a 00:05:48.950 LIB libspdk_bdev_lvol.a 00:05:48.950 SO libspdk_bdev_passthru.so.6.0 00:05:48.950 SO libspdk_blobfs_bdev.so.6.0 00:05:48.950 SO libspdk_bdev_lvol.so.6.0 00:05:48.950 SYMLINK libspdk_bdev_passthru.so 00:05:48.950 CC module/bdev/split/vbdev_split_rpc.o 00:05:48.950 CC module/bdev/raid/bdev_raid_sb.o 00:05:48.950 SYMLINK libspdk_blobfs_bdev.so 00:05:48.950 CC module/bdev/raid/raid0.o 00:05:48.950 SYMLINK libspdk_bdev_lvol.so 00:05:48.950 CC module/bdev/raid/raid1.o 00:05:49.209 LIB libspdk_bdev_zone_block.a 00:05:49.209 SO libspdk_bdev_zone_block.so.6.0 00:05:49.209 CC module/bdev/uring/bdev_uring.o 00:05:49.209 LIB libspdk_bdev_split.a 00:05:49.209 CC module/bdev/aio/bdev_aio.o 00:05:49.209 SYMLINK libspdk_bdev_zone_block.so 00:05:49.209 SO libspdk_bdev_split.so.6.0 00:05:49.209 CC module/bdev/raid/concat.o 00:05:49.209 CC module/bdev/aio/bdev_aio_rpc.o 00:05:49.209 SYMLINK libspdk_bdev_split.so 00:05:49.209 CC module/bdev/nvme/nvme_rpc.o 00:05:49.467 CC module/bdev/uring/bdev_uring_rpc.o 00:05:49.467 CC module/bdev/nvme/bdev_mdns_client.o 00:05:49.467 CC module/bdev/iscsi/bdev_iscsi.o 00:05:49.467 CC module/bdev/ftl/bdev_ftl.o 00:05:49.467 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:49.467 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:49.467 LIB libspdk_bdev_aio.a 00:05:49.467 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:49.467 SO libspdk_bdev_aio.so.6.0 00:05:49.726 LIB libspdk_bdev_uring.a 00:05:49.726 SYMLINK libspdk_bdev_aio.so 00:05:49.726 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:49.726 SO libspdk_bdev_uring.so.6.0 00:05:49.726 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:49.726 SYMLINK libspdk_bdev_uring.so 00:05:49.726 CC module/bdev/nvme/vbdev_opal.o 00:05:49.726 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:49.726 LIB libspdk_bdev_ftl.a 00:05:49.726 LIB libspdk_bdev_raid.a 00:05:49.726 SO libspdk_bdev_ftl.so.6.0 00:05:49.985 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:49.985 SO libspdk_bdev_raid.so.6.0 00:05:49.985 LIB libspdk_bdev_iscsi.a 00:05:49.985 SYMLINK libspdk_bdev_ftl.so 00:05:49.985 SO libspdk_bdev_iscsi.so.6.0 00:05:49.985 SYMLINK libspdk_bdev_raid.so 00:05:49.985 SYMLINK libspdk_bdev_iscsi.so 00:05:49.985 LIB libspdk_bdev_virtio.a 00:05:50.244 SO libspdk_bdev_virtio.so.6.0 00:05:50.244 SYMLINK libspdk_bdev_virtio.so 00:05:51.181 LIB libspdk_bdev_nvme.a 00:05:51.181 SO libspdk_bdev_nvme.so.7.1 00:05:51.181 SYMLINK libspdk_bdev_nvme.so 00:05:51.749 CC module/event/subsystems/vmd/vmd.o 00:05:51.749 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:51.749 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:51.749 CC module/event/subsystems/keyring/keyring.o 00:05:51.749 CC module/event/subsystems/sock/sock.o 00:05:51.749 CC module/event/subsystems/iobuf/iobuf.o 00:05:51.749 CC module/event/subsystems/iobuf/iobuf_rpc.o 
00:05:51.749 CC module/event/subsystems/scheduler/scheduler.o 00:05:51.749 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:51.749 CC module/event/subsystems/fsdev/fsdev.o 00:05:52.008 LIB libspdk_event_vfu_tgt.a 00:05:52.008 LIB libspdk_event_vhost_blk.a 00:05:52.008 LIB libspdk_event_keyring.a 00:05:52.008 LIB libspdk_event_vmd.a 00:05:52.008 LIB libspdk_event_scheduler.a 00:05:52.008 LIB libspdk_event_sock.a 00:05:52.008 SO libspdk_event_vhost_blk.so.3.0 00:05:52.008 SO libspdk_event_vfu_tgt.so.3.0 00:05:52.008 LIB libspdk_event_fsdev.a 00:05:52.008 LIB libspdk_event_iobuf.a 00:05:52.008 SO libspdk_event_keyring.so.1.0 00:05:52.008 SO libspdk_event_vmd.so.6.0 00:05:52.008 SO libspdk_event_scheduler.so.4.0 00:05:52.008 SO libspdk_event_sock.so.5.0 00:05:52.008 SO libspdk_event_fsdev.so.1.0 00:05:52.008 SO libspdk_event_iobuf.so.3.0 00:05:52.008 SYMLINK libspdk_event_vhost_blk.so 00:05:52.008 SYMLINK libspdk_event_keyring.so 00:05:52.008 SYMLINK libspdk_event_vfu_tgt.so 00:05:52.008 SYMLINK libspdk_event_scheduler.so 00:05:52.008 SYMLINK libspdk_event_sock.so 00:05:52.008 SYMLINK libspdk_event_vmd.so 00:05:52.008 SYMLINK libspdk_event_fsdev.so 00:05:52.008 SYMLINK libspdk_event_iobuf.so 00:05:52.267 CC module/event/subsystems/accel/accel.o 00:05:52.525 LIB libspdk_event_accel.a 00:05:52.525 SO libspdk_event_accel.so.6.0 00:05:52.525 SYMLINK libspdk_event_accel.so 00:05:53.093 CC module/event/subsystems/bdev/bdev.o 00:05:53.093 LIB libspdk_event_bdev.a 00:05:53.093 SO libspdk_event_bdev.so.6.0 00:05:53.353 SYMLINK libspdk_event_bdev.so 00:05:53.353 CC module/event/subsystems/ublk/ublk.o 00:05:53.353 CC module/event/subsystems/scsi/scsi.o 00:05:53.353 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:53.353 CC module/event/subsystems/nbd/nbd.o 00:05:53.353 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:53.611 LIB libspdk_event_ublk.a 00:05:53.611 LIB libspdk_event_nbd.a 00:05:53.611 LIB libspdk_event_scsi.a 00:05:53.611 SO libspdk_event_ublk.so.3.0 00:05:53.611 SO libspdk_event_nbd.so.6.0 00:05:53.611 SO libspdk_event_scsi.so.6.0 00:05:53.870 SYMLINK libspdk_event_ublk.so 00:05:53.870 SYMLINK libspdk_event_nbd.so 00:05:53.870 SYMLINK libspdk_event_scsi.so 00:05:53.870 LIB libspdk_event_nvmf.a 00:05:53.870 SO libspdk_event_nvmf.so.6.0 00:05:53.870 SYMLINK libspdk_event_nvmf.so 00:05:54.129 CC module/event/subsystems/iscsi/iscsi.o 00:05:54.129 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:54.130 LIB libspdk_event_vhost_scsi.a 00:05:54.130 LIB libspdk_event_iscsi.a 00:05:54.389 SO libspdk_event_vhost_scsi.so.3.0 00:05:54.389 SO libspdk_event_iscsi.so.6.0 00:05:54.389 SYMLINK libspdk_event_vhost_scsi.so 00:05:54.389 SYMLINK libspdk_event_iscsi.so 00:05:54.648 SO libspdk.so.6.0 00:05:54.648 SYMLINK libspdk.so 00:05:54.908 CC test/rpc_client/rpc_client_test.o 00:05:54.908 CXX app/trace/trace.o 00:05:54.908 CC app/trace_record/trace_record.o 00:05:54.908 TEST_HEADER include/spdk/accel.h 00:05:54.908 TEST_HEADER include/spdk/accel_module.h 00:05:54.908 TEST_HEADER include/spdk/assert.h 00:05:54.908 TEST_HEADER include/spdk/barrier.h 00:05:54.908 TEST_HEADER include/spdk/base64.h 00:05:54.908 TEST_HEADER include/spdk/bdev.h 00:05:54.908 TEST_HEADER include/spdk/bdev_module.h 00:05:54.908 TEST_HEADER include/spdk/bdev_zone.h 00:05:54.908 TEST_HEADER include/spdk/bit_array.h 00:05:54.908 TEST_HEADER include/spdk/bit_pool.h 00:05:54.908 TEST_HEADER include/spdk/blob_bdev.h 00:05:54.908 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:54.908 TEST_HEADER include/spdk/blobfs.h 00:05:54.908 
TEST_HEADER include/spdk/blob.h 00:05:54.908 TEST_HEADER include/spdk/conf.h 00:05:54.908 TEST_HEADER include/spdk/config.h 00:05:54.908 TEST_HEADER include/spdk/cpuset.h 00:05:54.908 TEST_HEADER include/spdk/crc16.h 00:05:54.908 TEST_HEADER include/spdk/crc32.h 00:05:54.908 TEST_HEADER include/spdk/crc64.h 00:05:54.908 TEST_HEADER include/spdk/dif.h 00:05:54.908 TEST_HEADER include/spdk/dma.h 00:05:54.908 TEST_HEADER include/spdk/endian.h 00:05:54.908 TEST_HEADER include/spdk/env_dpdk.h 00:05:54.908 TEST_HEADER include/spdk/env.h 00:05:54.908 CC app/nvmf_tgt/nvmf_main.o 00:05:54.908 TEST_HEADER include/spdk/event.h 00:05:54.908 TEST_HEADER include/spdk/fd_group.h 00:05:54.908 TEST_HEADER include/spdk/fd.h 00:05:54.908 TEST_HEADER include/spdk/file.h 00:05:54.908 TEST_HEADER include/spdk/fsdev.h 00:05:54.908 TEST_HEADER include/spdk/fsdev_module.h 00:05:54.908 TEST_HEADER include/spdk/ftl.h 00:05:54.908 TEST_HEADER include/spdk/gpt_spec.h 00:05:54.908 TEST_HEADER include/spdk/hexlify.h 00:05:54.908 TEST_HEADER include/spdk/histogram_data.h 00:05:54.908 TEST_HEADER include/spdk/idxd.h 00:05:54.908 TEST_HEADER include/spdk/idxd_spec.h 00:05:54.908 TEST_HEADER include/spdk/init.h 00:05:54.908 TEST_HEADER include/spdk/ioat.h 00:05:54.908 TEST_HEADER include/spdk/ioat_spec.h 00:05:54.908 TEST_HEADER include/spdk/iscsi_spec.h 00:05:54.908 TEST_HEADER include/spdk/json.h 00:05:54.908 TEST_HEADER include/spdk/jsonrpc.h 00:05:54.908 TEST_HEADER include/spdk/keyring.h 00:05:54.908 TEST_HEADER include/spdk/keyring_module.h 00:05:54.908 CC test/thread/poller_perf/poller_perf.o 00:05:54.908 TEST_HEADER include/spdk/likely.h 00:05:54.908 TEST_HEADER include/spdk/log.h 00:05:54.908 TEST_HEADER include/spdk/lvol.h 00:05:54.908 TEST_HEADER include/spdk/md5.h 00:05:54.908 TEST_HEADER include/spdk/memory.h 00:05:54.908 CC examples/util/zipf/zipf.o 00:05:54.908 TEST_HEADER include/spdk/mmio.h 00:05:54.908 TEST_HEADER include/spdk/nbd.h 00:05:54.908 TEST_HEADER include/spdk/net.h 00:05:54.908 TEST_HEADER include/spdk/notify.h 00:05:54.908 TEST_HEADER include/spdk/nvme.h 00:05:54.908 TEST_HEADER include/spdk/nvme_intel.h 00:05:54.908 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:54.908 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:54.908 TEST_HEADER include/spdk/nvme_spec.h 00:05:54.908 CC test/app/bdev_svc/bdev_svc.o 00:05:54.908 TEST_HEADER include/spdk/nvme_zns.h 00:05:54.908 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:54.908 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:54.908 CC test/dma/test_dma/test_dma.o 00:05:54.908 TEST_HEADER include/spdk/nvmf.h 00:05:54.908 TEST_HEADER include/spdk/nvmf_spec.h 00:05:54.908 TEST_HEADER include/spdk/nvmf_transport.h 00:05:54.908 TEST_HEADER include/spdk/opal.h 00:05:54.908 TEST_HEADER include/spdk/opal_spec.h 00:05:54.908 TEST_HEADER include/spdk/pci_ids.h 00:05:54.908 TEST_HEADER include/spdk/pipe.h 00:05:54.908 TEST_HEADER include/spdk/queue.h 00:05:54.908 TEST_HEADER include/spdk/reduce.h 00:05:54.908 TEST_HEADER include/spdk/rpc.h 00:05:54.908 TEST_HEADER include/spdk/scheduler.h 00:05:54.908 TEST_HEADER include/spdk/scsi.h 00:05:54.908 TEST_HEADER include/spdk/scsi_spec.h 00:05:54.908 TEST_HEADER include/spdk/sock.h 00:05:54.908 CC test/env/mem_callbacks/mem_callbacks.o 00:05:54.908 TEST_HEADER include/spdk/stdinc.h 00:05:54.908 TEST_HEADER include/spdk/string.h 00:05:54.908 TEST_HEADER include/spdk/thread.h 00:05:54.908 TEST_HEADER include/spdk/trace.h 00:05:54.908 TEST_HEADER include/spdk/trace_parser.h 00:05:54.908 LINK rpc_client_test 00:05:55.167 TEST_HEADER 
include/spdk/tree.h 00:05:55.167 TEST_HEADER include/spdk/ublk.h 00:05:55.167 TEST_HEADER include/spdk/util.h 00:05:55.167 TEST_HEADER include/spdk/uuid.h 00:05:55.167 TEST_HEADER include/spdk/version.h 00:05:55.167 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:55.167 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:55.167 TEST_HEADER include/spdk/vhost.h 00:05:55.167 TEST_HEADER include/spdk/vmd.h 00:05:55.167 TEST_HEADER include/spdk/xor.h 00:05:55.167 TEST_HEADER include/spdk/zipf.h 00:05:55.167 CXX test/cpp_headers/accel.o 00:05:55.167 LINK nvmf_tgt 00:05:55.167 LINK spdk_trace_record 00:05:55.167 LINK poller_perf 00:05:55.167 LINK zipf 00:05:55.167 LINK bdev_svc 00:05:55.167 LINK spdk_trace 00:05:55.167 CXX test/cpp_headers/accel_module.o 00:05:55.426 CC test/app/histogram_perf/histogram_perf.o 00:05:55.426 CC test/app/jsoncat/jsoncat.o 00:05:55.426 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:55.426 CXX test/cpp_headers/assert.o 00:05:55.426 CC examples/ioat/verify/verify.o 00:05:55.426 CC examples/ioat/perf/perf.o 00:05:55.426 LINK test_dma 00:05:55.426 CC examples/vmd/lsvmd/lsvmd.o 00:05:55.685 LINK jsoncat 00:05:55.685 LINK histogram_perf 00:05:55.685 CC app/iscsi_tgt/iscsi_tgt.o 00:05:55.685 CXX test/cpp_headers/barrier.o 00:05:55.685 LINK mem_callbacks 00:05:55.685 CXX test/cpp_headers/base64.o 00:05:55.685 LINK lsvmd 00:05:55.685 LINK verify 00:05:55.944 LINK ioat_perf 00:05:55.944 LINK nvme_fuzz 00:05:55.944 LINK iscsi_tgt 00:05:55.944 CXX test/cpp_headers/bdev.o 00:05:55.944 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:55.944 CC test/env/vtophys/vtophys.o 00:05:55.944 CC examples/vmd/led/led.o 00:05:55.944 CC examples/idxd/perf/perf.o 00:05:55.944 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:56.203 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:56.203 CC examples/thread/thread/thread_ex.o 00:05:56.203 LINK vtophys 00:05:56.203 CXX test/cpp_headers/bdev_module.o 00:05:56.203 LINK interrupt_tgt 00:05:56.203 LINK led 00:05:56.203 CC test/event/event_perf/event_perf.o 00:05:56.203 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:56.203 CC app/spdk_tgt/spdk_tgt.o 00:05:56.203 CXX test/cpp_headers/bdev_zone.o 00:05:56.462 LINK idxd_perf 00:05:56.462 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:56.462 LINK thread 00:05:56.462 LINK event_perf 00:05:56.462 CC test/event/reactor/reactor.o 00:05:56.462 CC test/event/reactor_perf/reactor_perf.o 00:05:56.462 CXX test/cpp_headers/bit_array.o 00:05:56.462 LINK spdk_tgt 00:05:56.462 CXX test/cpp_headers/bit_pool.o 00:05:56.462 LINK env_dpdk_post_init 00:05:56.462 LINK reactor 00:05:56.721 CC test/event/app_repeat/app_repeat.o 00:05:56.721 LINK reactor_perf 00:05:56.721 LINK vhost_fuzz 00:05:56.721 CXX test/cpp_headers/blob_bdev.o 00:05:56.721 CC app/spdk_lspci/spdk_lspci.o 00:05:56.721 LINK app_repeat 00:05:56.721 CC examples/sock/hello_world/hello_sock.o 00:05:56.721 CC test/env/memory/memory_ut.o 00:05:56.721 CC app/spdk_nvme_perf/perf.o 00:05:56.980 CC app/spdk_nvme_identify/identify.o 00:05:56.980 CC test/event/scheduler/scheduler.o 00:05:56.980 CXX test/cpp_headers/blobfs_bdev.o 00:05:56.980 CC app/spdk_nvme_discover/discovery_aer.o 00:05:56.980 LINK spdk_lspci 00:05:56.980 CC app/spdk_top/spdk_top.o 00:05:56.980 LINK hello_sock 00:05:57.238 CXX test/cpp_headers/blobfs.o 00:05:57.238 LINK spdk_nvme_discover 00:05:57.238 LINK scheduler 00:05:57.238 CXX test/cpp_headers/blob.o 00:05:57.238 CXX test/cpp_headers/conf.o 00:05:57.238 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:57.238 CXX test/cpp_headers/config.o 
00:05:57.497 CXX test/cpp_headers/cpuset.o 00:05:57.497 CC test/nvme/aer/aer.o 00:05:57.497 CC app/vhost/vhost.o 00:05:57.497 CXX test/cpp_headers/crc16.o 00:05:57.497 CC app/spdk_dd/spdk_dd.o 00:05:57.756 LINK hello_fsdev 00:05:57.756 LINK iscsi_fuzz 00:05:57.756 LINK spdk_nvme_identify 00:05:57.756 LINK spdk_nvme_perf 00:05:57.756 CXX test/cpp_headers/crc32.o 00:05:57.756 LINK vhost 00:05:57.756 LINK aer 00:05:58.015 CXX test/cpp_headers/crc64.o 00:05:58.015 LINK spdk_top 00:05:58.015 CXX test/cpp_headers/dif.o 00:05:58.015 CC test/app/stub/stub.o 00:05:58.015 LINK memory_ut 00:05:58.015 CC examples/accel/perf/accel_perf.o 00:05:58.015 CC test/nvme/reset/reset.o 00:05:58.015 LINK spdk_dd 00:05:58.015 CC test/accel/dif/dif.o 00:05:58.015 CC test/blobfs/mkfs/mkfs.o 00:05:58.015 CXX test/cpp_headers/dma.o 00:05:58.274 LINK stub 00:05:58.274 CC test/nvme/sgl/sgl.o 00:05:58.274 CC app/fio/nvme/fio_plugin.o 00:05:58.274 CC test/env/pci/pci_ut.o 00:05:58.274 CXX test/cpp_headers/endian.o 00:05:58.274 LINK mkfs 00:05:58.274 LINK reset 00:05:58.274 CC test/nvme/e2edp/nvme_dp.o 00:05:58.533 CXX test/cpp_headers/env_dpdk.o 00:05:58.533 LINK sgl 00:05:58.533 CC examples/blob/hello_world/hello_blob.o 00:05:58.533 LINK accel_perf 00:05:58.792 CXX test/cpp_headers/env.o 00:05:58.792 LINK nvme_dp 00:05:58.792 CC app/fio/bdev/fio_plugin.o 00:05:58.792 CC examples/blob/cli/blobcli.o 00:05:58.792 LINK pci_ut 00:05:58.792 CXX test/cpp_headers/event.o 00:05:58.792 LINK dif 00:05:58.792 LINK hello_blob 00:05:58.792 LINK spdk_nvme 00:05:58.792 CC examples/nvme/hello_world/hello_world.o 00:05:59.051 CXX test/cpp_headers/fd_group.o 00:05:59.051 CXX test/cpp_headers/fd.o 00:05:59.051 CXX test/cpp_headers/file.o 00:05:59.051 CC test/nvme/overhead/overhead.o 00:05:59.051 CXX test/cpp_headers/fsdev.o 00:05:59.051 CXX test/cpp_headers/fsdev_module.o 00:05:59.051 CC examples/bdev/hello_world/hello_bdev.o 00:05:59.051 CXX test/cpp_headers/ftl.o 00:05:59.051 LINK hello_world 00:05:59.310 LINK blobcli 00:05:59.310 LINK spdk_bdev 00:05:59.310 LINK overhead 00:05:59.310 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:59.310 CC examples/nvme/reconnect/reconnect.o 00:05:59.310 CXX test/cpp_headers/gpt_spec.o 00:05:59.310 LINK hello_bdev 00:05:59.310 CC test/bdev/bdevio/bdevio.o 00:05:59.310 CC examples/nvme/arbitration/arbitration.o 00:05:59.310 CC test/lvol/esnap/esnap.o 00:05:59.310 CXX test/cpp_headers/hexlify.o 00:05:59.568 CXX test/cpp_headers/histogram_data.o 00:05:59.568 CC test/nvme/err_injection/err_injection.o 00:05:59.568 CXX test/cpp_headers/idxd.o 00:05:59.568 LINK err_injection 00:05:59.568 CC examples/nvme/hotplug/hotplug.o 00:05:59.568 LINK reconnect 00:05:59.826 CC examples/bdev/bdevperf/bdevperf.o 00:05:59.826 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:59.826 LINK arbitration 00:05:59.827 LINK nvme_manage 00:05:59.827 CXX test/cpp_headers/idxd_spec.o 00:05:59.827 LINK bdevio 00:05:59.827 CXX test/cpp_headers/init.o 00:05:59.827 LINK cmb_copy 00:05:59.827 CC test/nvme/startup/startup.o 00:05:59.827 LINK hotplug 00:06:00.085 CC examples/nvme/abort/abort.o 00:06:00.085 CXX test/cpp_headers/ioat.o 00:06:00.085 CXX test/cpp_headers/ioat_spec.o 00:06:00.085 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:00.085 CC test/nvme/reserve/reserve.o 00:06:00.085 LINK startup 00:06:00.085 CC test/nvme/simple_copy/simple_copy.o 00:06:00.085 CC test/nvme/connect_stress/connect_stress.o 00:06:00.344 CXX test/cpp_headers/iscsi_spec.o 00:06:00.344 LINK pmr_persistence 00:06:00.344 CC 
test/nvme/boot_partition/boot_partition.o 00:06:00.344 LINK reserve 00:06:00.344 LINK abort 00:06:00.344 CC test/nvme/compliance/nvme_compliance.o 00:06:00.344 CXX test/cpp_headers/json.o 00:06:00.344 LINK connect_stress 00:06:00.344 LINK simple_copy 00:06:00.602 LINK boot_partition 00:06:00.602 CC test/nvme/fused_ordering/fused_ordering.o 00:06:00.602 LINK bdevperf 00:06:00.602 CXX test/cpp_headers/jsonrpc.o 00:06:00.602 CXX test/cpp_headers/keyring.o 00:06:00.602 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:00.602 CC test/nvme/fdp/fdp.o 00:06:00.602 CC test/nvme/cuse/cuse.o 00:06:00.602 CXX test/cpp_headers/keyring_module.o 00:06:00.602 LINK nvme_compliance 00:06:00.861 CXX test/cpp_headers/likely.o 00:06:00.861 LINK fused_ordering 00:06:00.861 CXX test/cpp_headers/log.o 00:06:00.861 LINK doorbell_aers 00:06:00.861 CXX test/cpp_headers/lvol.o 00:06:00.861 CXX test/cpp_headers/md5.o 00:06:00.861 CXX test/cpp_headers/memory.o 00:06:00.861 CXX test/cpp_headers/mmio.o 00:06:01.120 CXX test/cpp_headers/nbd.o 00:06:01.120 CXX test/cpp_headers/net.o 00:06:01.120 LINK fdp 00:06:01.120 CXX test/cpp_headers/notify.o 00:06:01.120 CXX test/cpp_headers/nvme.o 00:06:01.120 CXX test/cpp_headers/nvme_intel.o 00:06:01.120 CC examples/nvmf/nvmf/nvmf.o 00:06:01.120 CXX test/cpp_headers/nvme_ocssd.o 00:06:01.120 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:01.120 CXX test/cpp_headers/nvme_spec.o 00:06:01.120 CXX test/cpp_headers/nvme_zns.o 00:06:01.120 CXX test/cpp_headers/nvmf_cmd.o 00:06:01.378 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:01.378 CXX test/cpp_headers/nvmf.o 00:06:01.378 CXX test/cpp_headers/nvmf_spec.o 00:06:01.378 CXX test/cpp_headers/nvmf_transport.o 00:06:01.378 CXX test/cpp_headers/opal.o 00:06:01.378 CXX test/cpp_headers/opal_spec.o 00:06:01.378 LINK nvmf 00:06:01.378 CXX test/cpp_headers/pci_ids.o 00:06:01.378 CXX test/cpp_headers/pipe.o 00:06:01.378 CXX test/cpp_headers/queue.o 00:06:01.637 CXX test/cpp_headers/reduce.o 00:06:01.637 CXX test/cpp_headers/rpc.o 00:06:01.637 CXX test/cpp_headers/scheduler.o 00:06:01.637 CXX test/cpp_headers/scsi.o 00:06:01.637 CXX test/cpp_headers/scsi_spec.o 00:06:01.637 CXX test/cpp_headers/sock.o 00:06:01.637 CXX test/cpp_headers/stdinc.o 00:06:01.637 CXX test/cpp_headers/string.o 00:06:01.637 CXX test/cpp_headers/thread.o 00:06:01.637 CXX test/cpp_headers/trace.o 00:06:01.637 CXX test/cpp_headers/trace_parser.o 00:06:01.896 CXX test/cpp_headers/tree.o 00:06:01.896 CXX test/cpp_headers/ublk.o 00:06:01.896 CXX test/cpp_headers/util.o 00:06:01.896 CXX test/cpp_headers/uuid.o 00:06:01.896 CXX test/cpp_headers/version.o 00:06:01.896 CXX test/cpp_headers/vfio_user_pci.o 00:06:01.896 CXX test/cpp_headers/vfio_user_spec.o 00:06:01.896 CXX test/cpp_headers/vhost.o 00:06:01.896 CXX test/cpp_headers/vmd.o 00:06:01.896 CXX test/cpp_headers/xor.o 00:06:01.896 CXX test/cpp_headers/zipf.o 00:06:01.896 LINK cuse 00:06:04.430 LINK esnap 00:06:04.687 00:06:04.687 real 1m26.024s 00:06:04.687 user 7m4.477s 00:06:04.687 sys 1m12.589s 00:06:04.687 01:27:35 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:04.687 01:27:35 make -- common/autotest_common.sh@10 -- $ set +x 00:06:04.687 ************************************ 00:06:04.687 END TEST make 00:06:04.687 ************************************ 00:06:04.687 01:27:35 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:04.687 01:27:35 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:04.687 01:27:35 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:04.687 01:27:35 -- pm/common@42 -- 
$ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.687 01:27:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:04.687 01:27:35 -- pm/common@44 -- $ pid=6045 00:06:04.687 01:27:35 -- pm/common@50 -- $ kill -TERM 6045 00:06:04.687 01:27:35 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.687 01:27:35 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:04.687 01:27:35 -- pm/common@44 -- $ pid=6047 00:06:04.687 01:27:35 -- pm/common@50 -- $ kill -TERM 6047 00:06:04.687 01:27:35 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:04.687 01:27:35 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:04.687 01:27:35 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:04.687 01:27:35 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:04.687 01:27:35 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:04.947 01:27:35 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:04.947 01:27:35 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.947 01:27:35 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.947 01:27:35 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.947 01:27:35 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.947 01:27:35 -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.947 01:27:35 -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.947 01:27:35 -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.947 01:27:35 -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.947 01:27:35 -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.947 01:27:35 -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.947 01:27:35 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.947 01:27:35 -- scripts/common.sh@344 -- # case "$op" in 00:06:04.947 01:27:35 -- scripts/common.sh@345 -- # : 1 00:06:04.947 01:27:35 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.947 01:27:35 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.947 01:27:35 -- scripts/common.sh@365 -- # decimal 1 00:06:04.947 01:27:35 -- scripts/common.sh@353 -- # local d=1 00:06:04.947 01:27:35 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.947 01:27:35 -- scripts/common.sh@355 -- # echo 1 00:06:04.947 01:27:35 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.947 01:27:35 -- scripts/common.sh@366 -- # decimal 2 00:06:04.947 01:27:35 -- scripts/common.sh@353 -- # local d=2 00:06:04.947 01:27:35 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.947 01:27:35 -- scripts/common.sh@355 -- # echo 2 00:06:04.947 01:27:35 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.947 01:27:35 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.947 01:27:35 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.947 01:27:35 -- scripts/common.sh@368 -- # return 0 00:06:04.947 01:27:35 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.947 01:27:35 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.947 --rc genhtml_branch_coverage=1 00:06:04.947 --rc genhtml_function_coverage=1 00:06:04.947 --rc genhtml_legend=1 00:06:04.947 --rc geninfo_all_blocks=1 00:06:04.947 --rc geninfo_unexecuted_blocks=1 00:06:04.947 00:06:04.947 ' 00:06:04.947 01:27:35 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.947 --rc genhtml_branch_coverage=1 00:06:04.947 --rc genhtml_function_coverage=1 00:06:04.947 --rc genhtml_legend=1 00:06:04.947 --rc geninfo_all_blocks=1 00:06:04.947 --rc geninfo_unexecuted_blocks=1 00:06:04.947 00:06:04.947 ' 00:06:04.947 01:27:35 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.947 --rc genhtml_branch_coverage=1 00:06:04.947 --rc genhtml_function_coverage=1 00:06:04.947 --rc genhtml_legend=1 00:06:04.947 --rc geninfo_all_blocks=1 00:06:04.947 --rc geninfo_unexecuted_blocks=1 00:06:04.947 00:06:04.947 ' 00:06:04.947 01:27:35 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:04.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.947 --rc genhtml_branch_coverage=1 00:06:04.947 --rc genhtml_function_coverage=1 00:06:04.947 --rc genhtml_legend=1 00:06:04.947 --rc geninfo_all_blocks=1 00:06:04.947 --rc geninfo_unexecuted_blocks=1 00:06:04.947 00:06:04.947 ' 00:06:04.947 01:27:35 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:04.947 01:27:35 -- nvmf/common.sh@7 -- # uname -s 00:06:04.947 01:27:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.947 01:27:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.947 01:27:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.947 01:27:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.947 01:27:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.947 01:27:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.947 01:27:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.947 01:27:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.947 01:27:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.947 01:27:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.947 01:27:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:06:04.947 
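The xtrace above walks the lt/cmp_versions helpers in scripts/common.sh: the installed lcov version (1.15) is split on '.', '-' and ':' and compared field by field against 2 to decide whether the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage options are needed. A minimal sketch of that kind of dotted-version check follows; the name version_lt and the numeric-only, "less than"-only handling are illustrative simplifications, not the actual helper.

    #!/usr/bin/env bash
    # Sketch of a dotted-version "less than" test in the spirit of the traced
    # cmp_versions/lt helpers; assumes purely numeric version components.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # e.g. lcov 1.15 predates 2.x, so the older coverage flags get enabled:
    if version_lt 1.15 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi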
01:27:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:06:04.947 01:27:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.947 01:27:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.947 01:27:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:04.947 01:27:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:04.947 01:27:35 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:04.947 01:27:35 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:04.947 01:27:35 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.947 01:27:35 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.947 01:27:35 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.947 01:27:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.947 01:27:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.947 01:27:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.947 01:27:35 -- paths/export.sh@5 -- # export PATH 00:06:04.947 01:27:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.947 01:27:35 -- nvmf/common.sh@51 -- # : 0 00:06:04.947 01:27:35 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:04.947 01:27:35 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:04.947 01:27:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:04.947 01:27:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.947 01:27:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.947 01:27:35 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:04.947 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:04.947 01:27:35 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:04.947 01:27:35 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:04.947 01:27:35 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:04.947 01:27:35 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:04.947 01:27:35 -- spdk/autotest.sh@32 -- # uname -s 00:06:04.947 01:27:35 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:04.947 01:27:35 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:04.947 01:27:35 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:04.947 01:27:35 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:04.947 01:27:35 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:04.947 01:27:35 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:04.947 01:27:35 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:04.947 01:27:35 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:04.947 01:27:35 -- spdk/autotest.sh@48 -- # udevadm_pid=69394 00:06:04.947 01:27:35 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:04.947 01:27:35 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:04.947 01:27:35 -- pm/common@17 -- # local monitor 00:06:04.947 01:27:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.947 01:27:35 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:04.947 01:27:35 -- pm/common@25 -- # sleep 1 00:06:04.947 01:27:35 -- pm/common@21 -- # date +%s 00:06:04.947 01:27:35 -- pm/common@21 -- # date +%s 00:06:04.947 01:27:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734312455 00:06:04.947 01:27:35 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734312455 00:06:04.947 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734312455_collect-vmstat.pm.log 00:06:04.947 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734312455_collect-cpu-load.pm.log 00:06:05.895 01:27:36 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:05.895 01:27:36 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:05.895 01:27:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.895 01:27:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.895 01:27:36 -- spdk/autotest.sh@59 -- # create_test_list 00:06:05.895 01:27:36 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:05.895 01:27:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.168 01:27:36 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:06.168 01:27:36 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:06.168 01:27:36 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:06.168 01:27:36 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:06.168 01:27:36 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:06.168 01:27:36 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:06.168 01:27:36 -- common/autotest_common.sh@1457 -- # uname 00:06:06.168 01:27:36 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:06.168 01:27:36 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:06.168 01:27:36 -- common/autotest_common.sh@1477 -- # uname 00:06:06.168 01:27:36 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:06.168 01:27:36 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:06.168 01:27:36 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:06.168 lcov: LCOV version 1.15 00:06:06.168 01:27:36 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:06:21.048 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:21.048 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:39.132 01:28:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:39.132 01:28:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.132 01:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:39.132 01:28:06 -- spdk/autotest.sh@78 -- # rm -f 00:06:39.132 01:28:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:39.132 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:39.132 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:39.132 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:39.132 01:28:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:39.132 01:28:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:39.132 01:28:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:39.132 01:28:07 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:39.132 01:28:07 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:39.132 01:28:07 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:39.132 01:28:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:39.132 01:28:07 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:39.132 01:28:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.132 01:28:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:39.132 01:28:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:39.132 01:28:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:39.132 01:28:07 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:39.132 01:28:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.132 01:28:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:39.132 01:28:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:39.132 01:28:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.132 01:28:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:06:39.132 01:28:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:39.132 01:28:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:39.132 01:28:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:06:39.132 01:28:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:39.132 01:28:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:39.132 01:28:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:39.132 01:28:07 
-- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:39.132 01:28:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.132 01:28:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.132 01:28:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:39.132 01:28:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:39.132 01:28:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:39.132 No valid GPT data, bailing 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # pt= 00:06:39.132 01:28:07 -- scripts/common.sh@395 -- # return 1 00:06:39.132 01:28:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:39.132 1+0 records in 00:06:39.132 1+0 records out 00:06:39.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591064 s, 177 MB/s 00:06:39.132 01:28:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.132 01:28:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.132 01:28:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:39.132 01:28:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:39.132 01:28:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:39.132 No valid GPT data, bailing 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # pt= 00:06:39.132 01:28:07 -- scripts/common.sh@395 -- # return 1 00:06:39.132 01:28:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:39.132 1+0 records in 00:06:39.132 1+0 records out 00:06:39.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476406 s, 220 MB/s 00:06:39.132 01:28:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.132 01:28:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.132 01:28:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:39.132 01:28:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:39.132 01:28:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:39.132 No valid GPT data, bailing 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # pt= 00:06:39.132 01:28:07 -- scripts/common.sh@395 -- # return 1 00:06:39.132 01:28:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:39.132 1+0 records in 00:06:39.132 1+0 records out 00:06:39.132 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446146 s, 235 MB/s 00:06:39.132 01:28:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:39.132 01:28:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:39.132 01:28:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:39.132 01:28:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:39.132 01:28:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:39.132 No valid GPT data, bailing 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:39.132 01:28:07 -- scripts/common.sh@394 -- # pt= 00:06:39.132 01:28:07 -- scripts/common.sh@395 -- # return 1 00:06:39.132 01:28:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:39.132 1+0 records in 00:06:39.132 1+0 records out 00:06:39.132 1048576 bytes (1.0 
MB, 1.0 MiB) copied, 0.00458634 s, 229 MB/s 00:06:39.132 01:28:07 -- spdk/autotest.sh@105 -- # sync 00:06:39.132 01:28:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:39.132 01:28:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:39.132 01:28:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:39.701 01:28:10 -- spdk/autotest.sh@111 -- # uname -s 00:06:39.701 01:28:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:39.701 01:28:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:39.701 01:28:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:40.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:40.269 Hugepages 00:06:40.269 node hugesize free / total 00:06:40.269 node0 1048576kB 0 / 0 00:06:40.269 node0 2048kB 0 / 0 00:06:40.269 00:06:40.269 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:40.269 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:40.528 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:40.528 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:40.528 01:28:11 -- spdk/autotest.sh@117 -- # uname -s 00:06:40.528 01:28:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:40.528 01:28:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:40.528 01:28:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:41.096 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:41.355 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:41.355 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:41.355 01:28:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:42.292 01:28:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:42.292 01:28:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:42.292 01:28:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:42.292 01:28:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:42.292 01:28:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:42.292 01:28:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:42.292 01:28:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:42.292 01:28:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:42.292 01:28:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:42.292 01:28:12 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:42.292 01:28:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:42.292 01:28:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:42.859 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:42.859 Waiting for block devices as requested 00:06:42.859 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:42.859 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:43.118 01:28:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:43.118 01:28:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:43.118 01:28:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:43.118 01:28:13 
-- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:43.118 01:28:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:43.118 01:28:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:43.119 01:28:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:43.119 01:28:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:43.119 01:28:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:43.119 01:28:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:43.119 01:28:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:43.119 01:28:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1543 -- # continue 00:06:43.119 01:28:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:43.119 01:28:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:43.119 01:28:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:43.119 01:28:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:43.119 01:28:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:43.119 01:28:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:43.119 01:28:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:43.119 01:28:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:43.119 01:28:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:43.119 01:28:13 -- common/autotest_common.sh@1543 -- # continue 00:06:43.119 01:28:13 -- spdk/autotest.sh@122 -- # timing_exit 
pre_cleanup 00:06:43.119 01:28:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.119 01:28:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.119 01:28:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:43.119 01:28:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.119 01:28:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.119 01:28:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:43.687 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:43.946 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.946 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:43.946 01:28:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:43.946 01:28:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.946 01:28:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.946 01:28:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:43.946 01:28:14 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:43.946 01:28:14 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:43.947 01:28:14 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:43.947 01:28:14 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:43.947 01:28:14 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:43.947 01:28:14 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:43.947 01:28:14 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:43.947 01:28:14 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:43.947 01:28:14 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:43.947 01:28:14 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:43.947 01:28:14 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:43.947 01:28:14 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:43.947 01:28:14 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:43.947 01:28:14 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:43.947 01:28:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:43.947 01:28:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:43.947 01:28:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:43.947 01:28:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:43.947 01:28:14 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:43.947 01:28:14 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:44.206 01:28:14 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:44.206 01:28:14 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:44.206 01:28:14 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:44.206 01:28:14 -- common/autotest_common.sh@1572 -- # return 0 00:06:44.206 01:28:14 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:44.206 01:28:14 -- common/autotest_common.sh@1580 -- # return 0 00:06:44.206 01:28:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:44.206 01:28:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:44.206 01:28:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:44.206 01:28:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:44.206 01:28:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:44.206 01:28:14 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.206 01:28:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 01:28:14 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:44.206 01:28:14 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:44.206 01:28:14 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:44.206 01:28:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:44.206 01:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.206 01:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.206 01:28:14 -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 ************************************ 00:06:44.206 START TEST env 00:06:44.206 ************************************ 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:44.206 * Looking for test storage... 00:06:44.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.206 01:28:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.206 01:28:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.206 01:28:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.206 01:28:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.206 01:28:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.206 01:28:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.206 01:28:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.206 01:28:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.206 01:28:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.206 01:28:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.206 01:28:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.206 01:28:14 env -- scripts/common.sh@344 -- # case "$op" in 00:06:44.206 01:28:14 env -- scripts/common.sh@345 -- # : 1 00:06:44.206 01:28:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.206 01:28:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.206 01:28:14 env -- scripts/common.sh@365 -- # decimal 1 00:06:44.206 01:28:14 env -- scripts/common.sh@353 -- # local d=1 00:06:44.206 01:28:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.206 01:28:14 env -- scripts/common.sh@355 -- # echo 1 00:06:44.206 01:28:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.206 01:28:14 env -- scripts/common.sh@366 -- # decimal 2 00:06:44.206 01:28:14 env -- scripts/common.sh@353 -- # local d=2 00:06:44.206 01:28:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.206 01:28:14 env -- scripts/common.sh@355 -- # echo 2 00:06:44.206 01:28:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.206 01:28:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.206 01:28:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.206 01:28:14 env -- scripts/common.sh@368 -- # return 0 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.206 --rc genhtml_branch_coverage=1 00:06:44.206 --rc genhtml_function_coverage=1 00:06:44.206 --rc genhtml_legend=1 00:06:44.206 --rc geninfo_all_blocks=1 00:06:44.206 --rc geninfo_unexecuted_blocks=1 00:06:44.206 00:06:44.206 ' 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.206 --rc genhtml_branch_coverage=1 00:06:44.206 --rc genhtml_function_coverage=1 00:06:44.206 --rc genhtml_legend=1 00:06:44.206 --rc geninfo_all_blocks=1 00:06:44.206 --rc geninfo_unexecuted_blocks=1 00:06:44.206 00:06:44.206 ' 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.206 --rc genhtml_branch_coverage=1 00:06:44.206 --rc genhtml_function_coverage=1 00:06:44.206 --rc genhtml_legend=1 00:06:44.206 --rc geninfo_all_blocks=1 00:06:44.206 --rc geninfo_unexecuted_blocks=1 00:06:44.206 00:06:44.206 ' 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.206 --rc genhtml_branch_coverage=1 00:06:44.206 --rc genhtml_function_coverage=1 00:06:44.206 --rc genhtml_legend=1 00:06:44.206 --rc geninfo_all_blocks=1 00:06:44.206 --rc geninfo_unexecuted_blocks=1 00:06:44.206 00:06:44.206 ' 00:06:44.206 01:28:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.206 01:28:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.206 01:28:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.206 ************************************ 00:06:44.206 START TEST env_memory 00:06:44.206 ************************************ 00:06:44.206 01:28:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:44.206 00:06:44.206 00:06:44.206 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.206 http://cunit.sourceforge.net/ 00:06:44.206 00:06:44.206 00:06:44.206 Suite: memory 00:06:44.465 Test: alloc and free memory map ...[2024-12-16 01:28:14.876176] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:44.465 passed 00:06:44.465 Test: mem map translation ...[2024-12-16 01:28:14.907551] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:44.465 [2024-12-16 01:28:14.907718] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:44.465 [2024-12-16 01:28:14.907870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:44.466 [2024-12-16 01:28:14.907952] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:44.466 passed 00:06:44.466 Test: mem map registration ...[2024-12-16 01:28:14.971781] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:44.466 [2024-12-16 01:28:14.971929] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:44.466 passed 00:06:44.466 Test: mem map adjacent registrations ...passed 00:06:44.466 00:06:44.466 Run Summary: Type Total Ran Passed Failed Inactive 00:06:44.466 suites 1 1 n/a 0 0 00:06:44.466 tests 4 4 4 0 0 00:06:44.466 asserts 152 152 152 0 n/a 00:06:44.466 00:06:44.466 Elapsed time = 0.214 seconds 00:06:44.466 00:06:44.466 real 0m0.233s 00:06:44.466 user 0m0.216s 00:06:44.466 sys 0m0.012s 00:06:44.466 01:28:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.466 01:28:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:44.466 ************************************ 00:06:44.466 END TEST env_memory 00:06:44.466 ************************************ 00:06:44.466 01:28:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:44.466 01:28:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.466 01:28:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.466 01:28:15 env -- common/autotest_common.sh@10 -- # set +x 00:06:44.466 ************************************ 00:06:44.466 START TEST env_vtophys 00:06:44.466 ************************************ 00:06:44.466 01:28:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:44.725 EAL: lib.eal log level changed from notice to debug 00:06:44.725 EAL: Detected lcore 0 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 1 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 2 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 3 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 4 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 5 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 6 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 7 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 8 as core 0 on socket 0 00:06:44.725 EAL: Detected lcore 9 as core 0 on socket 0 00:06:44.725 EAL: Maximum logical cores by configuration: 128 00:06:44.725 EAL: Detected CPU lcores: 10 00:06:44.725 EAL: Detected NUMA nodes: 1 00:06:44.725 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:44.725 EAL: Detected shared linkage of DPDK 00:06:44.725 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:44.725 EAL: Registered [vdev] bus. 00:06:44.725 EAL: bus.vdev log level changed from disabled to notice 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:44.725 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:44.725 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:44.725 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:44.725 EAL: No shared files mode enabled, IPC will be disabled 00:06:44.725 EAL: No shared files mode enabled, IPC is disabled 00:06:44.725 EAL: Selected IOVA mode 'PA' 00:06:44.725 EAL: Probing VFIO support... 00:06:44.725 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:44.725 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:44.725 EAL: Ask a virtual area of 0x2e000 bytes 00:06:44.725 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:44.725 EAL: Setting up physically contiguous memory... 00:06:44.725 EAL: Setting maximum number of open files to 524288 00:06:44.725 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:44.725 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:44.725 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.725 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:44.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.725 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.725 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:44.725 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:44.725 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.725 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:44.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.725 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.725 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:44.725 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:44.725 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.725 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:44.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.725 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.725 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:44.725 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:44.725 EAL: Ask a virtual area of 0x61000 bytes 00:06:44.725 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:44.725 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:44.725 EAL: Ask a virtual area of 0x400000000 bytes 00:06:44.725 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:44.725 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:44.725 EAL: Hugepages will be freed exactly as allocated. 00:06:44.725 EAL: No shared files mode enabled, IPC is disabled 00:06:44.725 EAL: No shared files mode enabled, IPC is disabled 00:06:44.725 EAL: TSC frequency is ~2200000 KHz 00:06:44.725 EAL: Main lcore 0 is ready (tid=7ff7d0bf7a00;cpuset=[0]) 00:06:44.725 EAL: Trying to obtain current memory policy. 00:06:44.725 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.725 EAL: Restoring previous memory policy: 0 00:06:44.725 EAL: request: mp_malloc_sync 00:06:44.725 EAL: No shared files mode enabled, IPC is disabled 00:06:44.725 EAL: Heap on socket 0 was expanded by 2MB 00:06:44.726 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:44.726 EAL: Mem event callback 'spdk:(nil)' registered 00:06:44.726 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:44.726 00:06:44.726 00:06:44.726 CUnit - A unit testing framework for C - Version 2.1-3 00:06:44.726 http://cunit.sourceforge.net/ 00:06:44.726 00:06:44.726 00:06:44.726 Suite: components_suite 00:06:44.726 Test: vtophys_malloc_test ...passed 00:06:44.726 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 4MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 4MB 00:06:44.726 EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 6MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 6MB 00:06:44.726 EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 10MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 10MB 00:06:44.726 EAL: Trying to obtain current memory policy. 
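[Annotation: the "invalid spdk_mem_register parameters" lines in the env_memory output above come from the test feeding deliberately bad inputs (vaddr=0x200000 with len=1234, and vaddr=0x4d2); the API expects 2 MiB-granular regions. A minimal sketch of valid usage, assuming the public spdk_mem_register/spdk_mem_unregister calls and an already-initialized env; the helper name is illustrative.]

#include <stddef.h>
#include "spdk/env.h"

/* Register an application-owned, hugepage-backed buffer with SPDK so drivers
 * can translate it for DMA. Both vaddr and len should be 2 MiB granular,
 * which is exactly what the failing test inputs above violate. */
static int register_external_buffer(void *vaddr, size_t len)
{
    int rc = spdk_mem_register(vaddr, len);
    if (rc != 0) {
        return rc;    /* e.g. -EINVAL for an unaligned region */
    }

    /* ... issue I/O that DMAs into this buffer ... */

    return spdk_mem_unregister(vaddr, len);
}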
00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 18MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 18MB 00:06:44.726 EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 34MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 34MB 00:06:44.726 EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 66MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 66MB 00:06:44.726 EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.726 EAL: Restoring previous memory policy: 4 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was expanded by 130MB 00:06:44.726 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.726 EAL: request: mp_malloc_sync 00:06:44.726 EAL: No shared files mode enabled, IPC is disabled 00:06:44.726 EAL: Heap on socket 0 was shrunk by 130MB 00:06:44.726 EAL: Trying to obtain current memory policy. 00:06:44.726 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.985 EAL: Restoring previous memory policy: 4 00:06:44.985 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.985 EAL: request: mp_malloc_sync 00:06:44.985 EAL: No shared files mode enabled, IPC is disabled 00:06:44.985 EAL: Heap on socket 0 was expanded by 258MB 00:06:44.985 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.985 EAL: request: mp_malloc_sync 00:06:44.985 EAL: No shared files mode enabled, IPC is disabled 00:06:44.985 EAL: Heap on socket 0 was shrunk by 258MB 00:06:44.985 EAL: Trying to obtain current memory policy. 
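[Annotation: the alternating "Heap on socket 0 was expanded/shrunk by ..." messages around this point are vtophys_malloc_test allocating and freeing progressively larger DMA buffers, with the registered 'spdk:(nil)' mem event callback picking up each heap change. A minimal sketch of the allocate-translate-free pattern being exercised, assuming the public env API (spdk_dma_malloc, spdk_vtophys, spdk_dma_free) and an already-initialized env; size and alignment values are illustrative.]

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Allocate a DMA-safe buffer, translate it to a physical address, free it.
 * Allocations that do not fit the current heap trigger the mem event
 * callback and the expand/shrink messages seen in the log. */
static int dma_roundtrip(size_t size)
{
    void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MiB alignment */, NULL);
    if (buf == NULL) {
        return -1;
    }

    uint64_t len = size;
    uint64_t paddr = spdk_vtophys(buf, &len);    /* virtual -> physical */
    printf("vaddr=%p paddr=0x%" PRIx64 " contiguous=%" PRIu64 " bytes\n",
           buf, paddr, len);

    spdk_dma_free(buf);
    return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
}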
00:06:44.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:44.985 EAL: Restoring previous memory policy: 4 00:06:44.985 EAL: Calling mem event callback 'spdk:(nil)' 00:06:44.985 EAL: request: mp_malloc_sync 00:06:44.985 EAL: No shared files mode enabled, IPC is disabled 00:06:44.985 EAL: Heap on socket 0 was expanded by 514MB 00:06:44.985 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.245 EAL: request: mp_malloc_sync 00:06:45.245 EAL: No shared files mode enabled, IPC is disabled 00:06:45.245 EAL: Heap on socket 0 was shrunk by 514MB 00:06:45.245 EAL: Trying to obtain current memory policy. 00:06:45.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:45.245 EAL: Restoring previous memory policy: 4 00:06:45.245 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.245 EAL: request: mp_malloc_sync 00:06:45.245 EAL: No shared files mode enabled, IPC is disabled 00:06:45.245 EAL: Heap on socket 0 was expanded by 1026MB 00:06:45.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.504 passed 00:06:45.504 00:06:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.504 suites 1 1 n/a 0 0 00:06:45.504 tests 2 2 2 0 0 00:06:45.504 asserts 5820 5820 5820 0 n/a 00:06:45.504 00:06:45.504 Elapsed time = 0.708 seconds 00:06:45.504 EAL: request: mp_malloc_sync 00:06:45.504 EAL: No shared files mode enabled, IPC is disabled 00:06:45.504 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:45.504 EAL: Calling mem event callback 'spdk:(nil)' 00:06:45.504 EAL: request: mp_malloc_sync 00:06:45.504 EAL: No shared files mode enabled, IPC is disabled 00:06:45.504 EAL: Heap on socket 0 was shrunk by 2MB 00:06:45.504 EAL: No shared files mode enabled, IPC is disabled 00:06:45.504 EAL: No shared files mode enabled, IPC is disabled 00:06:45.504 EAL: No shared files mode enabled, IPC is disabled 00:06:45.504 00:06:45.504 real 0m0.913s 00:06:45.504 user 0m0.479s 00:06:45.504 sys 0m0.304s 00:06:45.504 01:28:16 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.504 01:28:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:45.504 ************************************ 00:06:45.504 END TEST env_vtophys 00:06:45.504 ************************************ 00:06:45.504 01:28:16 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:45.504 01:28:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.504 01:28:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.504 01:28:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.504 ************************************ 00:06:45.504 START TEST env_pci 00:06:45.504 ************************************ 00:06:45.504 01:28:16 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:45.504 00:06:45.504 00:06:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:06:45.504 http://cunit.sourceforge.net/ 00:06:45.504 00:06:45.504 00:06:45.504 Suite: pci 00:06:45.504 Test: pci_hook ...[2024-12-16 01:28:16.102002] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 71642 has claimed it 00:06:45.504 passed 00:06:45.504 00:06:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:06:45.504 suites 1 1 n/a 0 0 00:06:45.504 tests 1 1 1 0 0 00:06:45.504 asserts 25 25 25 0 n/a 00:06:45.504 00:06:45.504 Elapsed time = 0.002 seconds 00:06:45.504 EAL: Cannot find 
device (10000:00:01.0) 00:06:45.504 EAL: Failed to attach device on primary process 00:06:45.504 00:06:45.504 real 0m0.022s 00:06:45.504 user 0m0.008s 00:06:45.504 sys 0m0.014s 00:06:45.504 01:28:16 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.504 01:28:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:45.504 ************************************ 00:06:45.504 END TEST env_pci 00:06:45.504 ************************************ 00:06:45.504 01:28:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:45.504 01:28:16 env -- env/env.sh@15 -- # uname 00:06:45.504 01:28:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:45.504 01:28:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:45.504 01:28:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.504 01:28:16 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:45.504 01:28:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.504 01:28:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:45.763 ************************************ 00:06:45.763 START TEST env_dpdk_post_init 00:06:45.763 ************************************ 00:06:45.763 01:28:16 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:45.763 EAL: Detected CPU lcores: 10 00:06:45.763 EAL: Detected NUMA nodes: 1 00:06:45.763 EAL: Detected shared linkage of DPDK 00:06:45.763 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:45.763 EAL: Selected IOVA mode 'PA' 00:06:45.763 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:45.764 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:45.764 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:45.764 Starting DPDK initialization... 00:06:45.764 Starting SPDK post initialization... 00:06:45.764 SPDK NVMe probe 00:06:45.764 Attaching to 0000:00:10.0 00:06:45.764 Attaching to 0000:00:11.0 00:06:45.764 Attached to 0000:00:10.0 00:06:45.764 Attached to 0000:00:11.0 00:06:45.764 Cleaning up... 
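[Annotation: the env_dpdk_post_init run above was launched with "-c 0x1 --base-virtaddr=0x200000000000"; inside an application the same environment setup goes through spdk_env_opts before any PCI probing. A minimal sketch assuming the public env API of the SPDK version in this log (recent releases read opts_size before spdk_env_opts_init); the program name is illustrative and the option values are taken from the flags shown in the log.]

#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
    struct spdk_env_opts opts;

    opts.opts_size = sizeof(opts);             /* read by spdk_env_opts_init on recent SPDK */
    spdk_env_opts_init(&opts);
    opts.name = "env_post_init_sketch";        /* illustrative */
    opts.core_mask = "0x1";                    /* matches -c 0x1 */
    opts.base_virtaddr = 0x200000000000ULL;    /* matches --base-virtaddr=0x200000000000 */

    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* PCI enumeration / spdk_nvme probe and attach would follow here,
     * as in the "Attaching to 0000:00:10.0 / 0000:00:11.0" output above. */

    spdk_env_fini();
    return 0;
}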
00:06:45.764 ************************************ 00:06:45.764 END TEST env_dpdk_post_init 00:06:45.764 ************************************ 00:06:45.764 00:06:45.764 real 0m0.189s 00:06:45.764 user 0m0.053s 00:06:45.764 sys 0m0.036s 00:06:45.764 01:28:16 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.764 01:28:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.764 01:28:16 env -- env/env.sh@26 -- # uname 00:06:45.764 01:28:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:45.764 01:28:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:45.764 01:28:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.764 01:28:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.764 01:28:16 env -- common/autotest_common.sh@10 -- # set +x 00:06:46.023 ************************************ 00:06:46.023 START TEST env_mem_callbacks 00:06:46.023 ************************************ 00:06:46.023 01:28:16 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:46.023 EAL: Detected CPU lcores: 10 00:06:46.023 EAL: Detected NUMA nodes: 1 00:06:46.023 EAL: Detected shared linkage of DPDK 00:06:46.023 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:46.023 EAL: Selected IOVA mode 'PA' 00:06:46.023 00:06:46.023 00:06:46.023 CUnit - A unit testing framework for C - Version 2.1-3 00:06:46.023 http://cunit.sourceforge.net/ 00:06:46.023 00:06:46.023 00:06:46.023 Suite: memory 00:06:46.023 Test: test ... 00:06:46.023 register 0x200000200000 2097152 00:06:46.023 malloc 3145728 00:06:46.023 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:46.023 register 0x200000400000 4194304 00:06:46.023 buf 0x200000500000 len 3145728 PASSED 00:06:46.023 malloc 64 00:06:46.023 buf 0x2000004fff40 len 64 PASSED 00:06:46.023 malloc 4194304 00:06:46.023 register 0x200000800000 6291456 00:06:46.023 buf 0x200000a00000 len 4194304 PASSED 00:06:46.023 free 0x200000500000 3145728 00:06:46.023 free 0x2000004fff40 64 00:06:46.023 unregister 0x200000400000 4194304 PASSED 00:06:46.023 free 0x200000a00000 4194304 00:06:46.023 unregister 0x200000800000 6291456 PASSED 00:06:46.023 malloc 8388608 00:06:46.023 register 0x200000400000 10485760 00:06:46.023 buf 0x200000600000 len 8388608 PASSED 00:06:46.023 free 0x200000600000 8388608 00:06:46.023 unregister 0x200000400000 10485760 PASSED 00:06:46.023 passed 00:06:46.023 00:06:46.023 Run Summary: Type Total Ran Passed Failed Inactive 00:06:46.023 suites 1 1 n/a 0 0 00:06:46.023 tests 1 1 1 0 0 00:06:46.023 asserts 15 15 15 0 n/a 00:06:46.023 00:06:46.023 Elapsed time = 0.006 seconds 00:06:46.023 ************************************ 00:06:46.023 END TEST env_mem_callbacks 00:06:46.023 ************************************ 00:06:46.023 00:06:46.023 real 0m0.141s 00:06:46.023 user 0m0.017s 00:06:46.023 sys 0m0.022s 00:06:46.023 01:28:16 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.023 01:28:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:46.023 ************************************ 00:06:46.023 END TEST env 00:06:46.023 ************************************ 00:06:46.023 00:06:46.023 real 0m1.983s 00:06:46.023 user 0m0.974s 00:06:46.023 sys 0m0.647s 00:06:46.023 01:28:16 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.023 01:28:16 env -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.023 01:28:16 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:46.023 01:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.023 01:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.023 01:28:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.023 ************************************ 00:06:46.023 START TEST rpc 00:06:46.023 ************************************ 00:06:46.023 01:28:16 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:46.283 * Looking for test storage... 00:06:46.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.283 01:28:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.283 01:28:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.283 01:28:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.283 01:28:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.283 01:28:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.283 01:28:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:46.283 01:28:16 rpc -- scripts/common.sh@345 -- # : 1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.283 01:28:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:46.283 01:28:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@353 -- # local d=1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.283 01:28:16 rpc -- scripts/common.sh@355 -- # echo 1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.283 01:28:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@353 -- # local d=2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.283 01:28:16 rpc -- scripts/common.sh@355 -- # echo 2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.283 01:28:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.283 01:28:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.283 01:28:16 rpc -- scripts/common.sh@368 -- # return 0 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:46.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.283 --rc genhtml_branch_coverage=1 00:06:46.283 --rc genhtml_function_coverage=1 00:06:46.283 --rc genhtml_legend=1 00:06:46.283 --rc geninfo_all_blocks=1 00:06:46.283 --rc geninfo_unexecuted_blocks=1 00:06:46.283 00:06:46.283 ' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:46.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.283 --rc genhtml_branch_coverage=1 00:06:46.283 --rc genhtml_function_coverage=1 00:06:46.283 --rc genhtml_legend=1 00:06:46.283 --rc geninfo_all_blocks=1 00:06:46.283 --rc geninfo_unexecuted_blocks=1 00:06:46.283 00:06:46.283 ' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:46.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.283 --rc genhtml_branch_coverage=1 00:06:46.283 --rc genhtml_function_coverage=1 00:06:46.283 --rc genhtml_legend=1 00:06:46.283 --rc geninfo_all_blocks=1 00:06:46.283 --rc geninfo_unexecuted_blocks=1 00:06:46.283 00:06:46.283 ' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:46.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.283 --rc genhtml_branch_coverage=1 00:06:46.283 --rc genhtml_function_coverage=1 00:06:46.283 --rc genhtml_legend=1 00:06:46.283 --rc geninfo_all_blocks=1 00:06:46.283 --rc geninfo_unexecuted_blocks=1 00:06:46.283 00:06:46.283 ' 00:06:46.283 01:28:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=71765 00:06:46.283 01:28:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.283 01:28:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 71765 00:06:46.283 01:28:16 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@835 -- # '[' -z 71765 ']' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.283 01:28:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.283 [2024-12-16 01:28:16.912814] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:46.283 [2024-12-16 01:28:16.913128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71765 ] 00:06:46.542 [2024-12-16 01:28:17.056384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.542 [2024-12-16 01:28:17.075899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:06:46.542 [2024-12-16 01:28:17.076194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 71765' to capture a snapshot of events at runtime. 00:06:46.542 [2024-12-16 01:28:17.076371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.542 [2024-12-16 01:28:17.076426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.542 [2024-12-16 01:28:17.076544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid71765 for offline analysis/debug. 00:06:46.542 [2024-12-16 01:28:17.076903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.542 [2024-12-16 01:28:17.111903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.802 01:28:17 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.802 01:28:17 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.802 01:28:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.802 01:28:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:46.802 01:28:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:46.802 01:28:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:46.802 01:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.802 01:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.802 01:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.802 ************************************ 00:06:46.802 START TEST rpc_integrity 00:06:46.802 ************************************ 00:06:46.802 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:46.802 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:46.802 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.802 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.802 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.802 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:46.802 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:46.803 { 
00:06:46.803 "name": "Malloc0", 00:06:46.803 "aliases": [ 00:06:46.803 "c468b6c7-aaf9-45d5-b6e5-36795fe32b7d" 00:06:46.803 ], 00:06:46.803 "product_name": "Malloc disk", 00:06:46.803 "block_size": 512, 00:06:46.803 "num_blocks": 16384, 00:06:46.803 "uuid": "c468b6c7-aaf9-45d5-b6e5-36795fe32b7d", 00:06:46.803 "assigned_rate_limits": { 00:06:46.803 "rw_ios_per_sec": 0, 00:06:46.803 "rw_mbytes_per_sec": 0, 00:06:46.803 "r_mbytes_per_sec": 0, 00:06:46.803 "w_mbytes_per_sec": 0 00:06:46.803 }, 00:06:46.803 "claimed": false, 00:06:46.803 "zoned": false, 00:06:46.803 "supported_io_types": { 00:06:46.803 "read": true, 00:06:46.803 "write": true, 00:06:46.803 "unmap": true, 00:06:46.803 "flush": true, 00:06:46.803 "reset": true, 00:06:46.803 "nvme_admin": false, 00:06:46.803 "nvme_io": false, 00:06:46.803 "nvme_io_md": false, 00:06:46.803 "write_zeroes": true, 00:06:46.803 "zcopy": true, 00:06:46.803 "get_zone_info": false, 00:06:46.803 "zone_management": false, 00:06:46.803 "zone_append": false, 00:06:46.803 "compare": false, 00:06:46.803 "compare_and_write": false, 00:06:46.803 "abort": true, 00:06:46.803 "seek_hole": false, 00:06:46.803 "seek_data": false, 00:06:46.803 "copy": true, 00:06:46.803 "nvme_iov_md": false 00:06:46.803 }, 00:06:46.803 "memory_domains": [ 00:06:46.803 { 00:06:46.803 "dma_device_id": "system", 00:06:46.803 "dma_device_type": 1 00:06:46.803 }, 00:06:46.803 { 00:06:46.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.803 "dma_device_type": 2 00:06:46.803 } 00:06:46.803 ], 00:06:46.803 "driver_specific": {} 00:06:46.803 } 00:06:46.803 ]' 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.803 [2024-12-16 01:28:17.402502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:46.803 [2024-12-16 01:28:17.402594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:46.803 [2024-12-16 01:28:17.402614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12305a0 00:06:46.803 [2024-12-16 01:28:17.402623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:46.803 [2024-12-16 01:28:17.404224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:46.803 [2024-12-16 01:28:17.404260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:46.803 Passthru0 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:46.803 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:46.803 { 00:06:46.803 "name": "Malloc0", 00:06:46.803 "aliases": [ 00:06:46.803 "c468b6c7-aaf9-45d5-b6e5-36795fe32b7d" 00:06:46.803 ], 00:06:46.803 "product_name": "Malloc disk", 00:06:46.803 "block_size": 512, 00:06:46.803 "num_blocks": 16384, 00:06:46.803 
"uuid": "c468b6c7-aaf9-45d5-b6e5-36795fe32b7d", 00:06:46.803 "assigned_rate_limits": { 00:06:46.803 "rw_ios_per_sec": 0, 00:06:46.803 "rw_mbytes_per_sec": 0, 00:06:46.803 "r_mbytes_per_sec": 0, 00:06:46.803 "w_mbytes_per_sec": 0 00:06:46.803 }, 00:06:46.803 "claimed": true, 00:06:46.803 "claim_type": "exclusive_write", 00:06:46.803 "zoned": false, 00:06:46.803 "supported_io_types": { 00:06:46.803 "read": true, 00:06:46.803 "write": true, 00:06:46.803 "unmap": true, 00:06:46.803 "flush": true, 00:06:46.803 "reset": true, 00:06:46.803 "nvme_admin": false, 00:06:46.803 "nvme_io": false, 00:06:46.803 "nvme_io_md": false, 00:06:46.803 "write_zeroes": true, 00:06:46.803 "zcopy": true, 00:06:46.803 "get_zone_info": false, 00:06:46.803 "zone_management": false, 00:06:46.803 "zone_append": false, 00:06:46.803 "compare": false, 00:06:46.803 "compare_and_write": false, 00:06:46.803 "abort": true, 00:06:46.803 "seek_hole": false, 00:06:46.803 "seek_data": false, 00:06:46.803 "copy": true, 00:06:46.803 "nvme_iov_md": false 00:06:46.803 }, 00:06:46.803 "memory_domains": [ 00:06:46.803 { 00:06:46.803 "dma_device_id": "system", 00:06:46.803 "dma_device_type": 1 00:06:46.803 }, 00:06:46.803 { 00:06:46.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.803 "dma_device_type": 2 00:06:46.803 } 00:06:46.803 ], 00:06:46.803 "driver_specific": {} 00:06:46.803 }, 00:06:46.803 { 00:06:46.803 "name": "Passthru0", 00:06:46.803 "aliases": [ 00:06:46.803 "83218038-e268-59e9-adea-5cdf178d5df0" 00:06:46.803 ], 00:06:46.803 "product_name": "passthru", 00:06:46.803 "block_size": 512, 00:06:46.803 "num_blocks": 16384, 00:06:46.803 "uuid": "83218038-e268-59e9-adea-5cdf178d5df0", 00:06:46.803 "assigned_rate_limits": { 00:06:46.803 "rw_ios_per_sec": 0, 00:06:46.803 "rw_mbytes_per_sec": 0, 00:06:46.803 "r_mbytes_per_sec": 0, 00:06:46.803 "w_mbytes_per_sec": 0 00:06:46.803 }, 00:06:46.803 "claimed": false, 00:06:46.803 "zoned": false, 00:06:46.803 "supported_io_types": { 00:06:46.803 "read": true, 00:06:46.803 "write": true, 00:06:46.803 "unmap": true, 00:06:46.803 "flush": true, 00:06:46.803 "reset": true, 00:06:46.803 "nvme_admin": false, 00:06:46.803 "nvme_io": false, 00:06:46.803 "nvme_io_md": false, 00:06:46.803 "write_zeroes": true, 00:06:46.803 "zcopy": true, 00:06:46.803 "get_zone_info": false, 00:06:46.803 "zone_management": false, 00:06:46.803 "zone_append": false, 00:06:46.803 "compare": false, 00:06:46.803 "compare_and_write": false, 00:06:46.803 "abort": true, 00:06:46.803 "seek_hole": false, 00:06:46.803 "seek_data": false, 00:06:46.803 "copy": true, 00:06:46.803 "nvme_iov_md": false 00:06:46.803 }, 00:06:46.803 "memory_domains": [ 00:06:46.803 { 00:06:46.803 "dma_device_id": "system", 00:06:46.803 "dma_device_type": 1 00:06:46.803 }, 00:06:46.803 { 00:06:46.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.803 "dma_device_type": 2 00:06:46.803 } 00:06:46.803 ], 00:06:46.803 "driver_specific": { 00:06:46.803 "passthru": { 00:06:46.803 "name": "Passthru0", 00:06:46.803 "base_bdev_name": "Malloc0" 00:06:46.803 } 00:06:46.803 } 00:06:46.803 } 00:06:46.803 ]' 00:06:46.803 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 01:28:17 
rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:47.062 ************************************ 00:06:47.062 END TEST rpc_integrity 00:06:47.062 ************************************ 00:06:47.062 01:28:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.062 00:06:47.062 real 0m0.327s 00:06:47.062 user 0m0.217s 00:06:47.062 sys 0m0.040s 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.062 01:28:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 01:28:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:47.062 01:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.062 01:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.062 01:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 ************************************ 00:06:47.062 START TEST rpc_plugins 00:06:47.062 ************************************ 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:47.062 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.062 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:47.062 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.062 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:47.062 { 00:06:47.062 "name": "Malloc1", 00:06:47.062 "aliases": [ 00:06:47.062 "15afcbfb-51f8-41d8-8dae-1b5fed9a9cde" 00:06:47.062 ], 00:06:47.062 "product_name": "Malloc disk", 00:06:47.062 "block_size": 4096, 00:06:47.062 "num_blocks": 256, 00:06:47.062 "uuid": "15afcbfb-51f8-41d8-8dae-1b5fed9a9cde", 00:06:47.062 "assigned_rate_limits": { 00:06:47.062 "rw_ios_per_sec": 0, 00:06:47.063 "rw_mbytes_per_sec": 0, 00:06:47.063 "r_mbytes_per_sec": 0, 00:06:47.063 "w_mbytes_per_sec": 0 00:06:47.063 }, 00:06:47.063 "claimed": false, 00:06:47.063 "zoned": false, 00:06:47.063 "supported_io_types": { 00:06:47.063 "read": true, 00:06:47.063 "write": true, 00:06:47.063 "unmap": true, 00:06:47.063 "flush": true, 00:06:47.063 "reset": true, 
00:06:47.063 "nvme_admin": false, 00:06:47.063 "nvme_io": false, 00:06:47.063 "nvme_io_md": false, 00:06:47.063 "write_zeroes": true, 00:06:47.063 "zcopy": true, 00:06:47.063 "get_zone_info": false, 00:06:47.063 "zone_management": false, 00:06:47.063 "zone_append": false, 00:06:47.063 "compare": false, 00:06:47.063 "compare_and_write": false, 00:06:47.063 "abort": true, 00:06:47.063 "seek_hole": false, 00:06:47.063 "seek_data": false, 00:06:47.063 "copy": true, 00:06:47.063 "nvme_iov_md": false 00:06:47.063 }, 00:06:47.063 "memory_domains": [ 00:06:47.063 { 00:06:47.063 "dma_device_id": "system", 00:06:47.063 "dma_device_type": 1 00:06:47.063 }, 00:06:47.063 { 00:06:47.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.063 "dma_device_type": 2 00:06:47.063 } 00:06:47.063 ], 00:06:47.063 "driver_specific": {} 00:06:47.063 } 00:06:47.063 ]' 00:06:47.063 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:47.322 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:47.322 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.322 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.322 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:47.322 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:47.322 ************************************ 00:06:47.322 END TEST rpc_plugins 00:06:47.322 ************************************ 00:06:47.322 01:28:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:47.322 00:06:47.322 real 0m0.165s 00:06:47.322 user 0m0.107s 00:06:47.322 sys 0m0.019s 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.322 01:28:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 01:28:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:47.322 01:28:17 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.322 01:28:17 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.322 01:28:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 ************************************ 00:06:47.322 START TEST rpc_trace_cmd_test 00:06:47.322 ************************************ 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:47.322 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid71765", 00:06:47.322 "tpoint_group_mask": "0x8", 00:06:47.322 
"iscsi_conn": { 00:06:47.322 "mask": "0x2", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "scsi": { 00:06:47.322 "mask": "0x4", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "bdev": { 00:06:47.322 "mask": "0x8", 00:06:47.322 "tpoint_mask": "0xffffffffffffffff" 00:06:47.322 }, 00:06:47.322 "nvmf_rdma": { 00:06:47.322 "mask": "0x10", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "nvmf_tcp": { 00:06:47.322 "mask": "0x20", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "ftl": { 00:06:47.322 "mask": "0x40", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "blobfs": { 00:06:47.322 "mask": "0x80", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "dsa": { 00:06:47.322 "mask": "0x200", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "thread": { 00:06:47.322 "mask": "0x400", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "nvme_pcie": { 00:06:47.322 "mask": "0x800", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "iaa": { 00:06:47.322 "mask": "0x1000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "nvme_tcp": { 00:06:47.322 "mask": "0x2000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "bdev_nvme": { 00:06:47.322 "mask": "0x4000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "sock": { 00:06:47.322 "mask": "0x8000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "blob": { 00:06:47.322 "mask": "0x10000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "bdev_raid": { 00:06:47.322 "mask": "0x20000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 }, 00:06:47.322 "scheduler": { 00:06:47.322 "mask": "0x40000", 00:06:47.322 "tpoint_mask": "0x0" 00:06:47.322 } 00:06:47.322 }' 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:47.322 01:28:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:47.582 ************************************ 00:06:47.582 END TEST rpc_trace_cmd_test 00:06:47.582 ************************************ 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:47.582 00:06:47.582 real 0m0.277s 00:06:47.582 user 0m0.242s 00:06:47.582 sys 0m0.027s 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.582 01:28:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.582 01:28:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:47.582 01:28:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:47.582 01:28:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:47.582 01:28:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.582 01:28:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.582 01:28:18 rpc -- common/autotest_common.sh@10 
-- # set +x 00:06:47.582 ************************************ 00:06:47.582 START TEST rpc_daemon_integrity 00:06:47.582 ************************************ 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:47.582 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:47.841 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:47.841 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:47.842 { 00:06:47.842 "name": "Malloc2", 00:06:47.842 "aliases": [ 00:06:47.842 "cba35071-95d5-483d-bf72-dcdc97cb975a" 00:06:47.842 ], 00:06:47.842 "product_name": "Malloc disk", 00:06:47.842 "block_size": 512, 00:06:47.842 "num_blocks": 16384, 00:06:47.842 "uuid": "cba35071-95d5-483d-bf72-dcdc97cb975a", 00:06:47.842 "assigned_rate_limits": { 00:06:47.842 "rw_ios_per_sec": 0, 00:06:47.842 "rw_mbytes_per_sec": 0, 00:06:47.842 "r_mbytes_per_sec": 0, 00:06:47.842 "w_mbytes_per_sec": 0 00:06:47.842 }, 00:06:47.842 "claimed": false, 00:06:47.842 "zoned": false, 00:06:47.842 "supported_io_types": { 00:06:47.842 "read": true, 00:06:47.842 "write": true, 00:06:47.842 "unmap": true, 00:06:47.842 "flush": true, 00:06:47.842 "reset": true, 00:06:47.842 "nvme_admin": false, 00:06:47.842 "nvme_io": false, 00:06:47.842 "nvme_io_md": false, 00:06:47.842 "write_zeroes": true, 00:06:47.842 "zcopy": true, 00:06:47.842 "get_zone_info": false, 00:06:47.842 "zone_management": false, 00:06:47.842 "zone_append": false, 00:06:47.842 "compare": false, 00:06:47.842 "compare_and_write": false, 00:06:47.842 "abort": true, 00:06:47.842 "seek_hole": false, 00:06:47.842 "seek_data": false, 00:06:47.842 "copy": true, 00:06:47.842 "nvme_iov_md": false 00:06:47.842 }, 00:06:47.842 "memory_domains": [ 00:06:47.842 { 00:06:47.842 "dma_device_id": "system", 00:06:47.842 "dma_device_type": 1 00:06:47.842 }, 00:06:47.842 { 00:06:47.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.842 "dma_device_type": 2 00:06:47.842 } 00:06:47.842 ], 00:06:47.842 "driver_specific": {} 00:06:47.842 } 00:06:47.842 ]' 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 [2024-12-16 01:28:18.330935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:47.842 [2024-12-16 01:28:18.331149] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.842 [2024-12-16 01:28:18.331213] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1230930 00:06:47.842 [2024-12-16 01:28:18.331328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.842 [2024-12-16 01:28:18.332968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.842 [2024-12-16 01:28:18.333137] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:47.842 Passthru0 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:47.842 { 00:06:47.842 "name": "Malloc2", 00:06:47.842 "aliases": [ 00:06:47.842 "cba35071-95d5-483d-bf72-dcdc97cb975a" 00:06:47.842 ], 00:06:47.842 "product_name": "Malloc disk", 00:06:47.842 "block_size": 512, 00:06:47.842 "num_blocks": 16384, 00:06:47.842 "uuid": "cba35071-95d5-483d-bf72-dcdc97cb975a", 00:06:47.842 "assigned_rate_limits": { 00:06:47.842 "rw_ios_per_sec": 0, 00:06:47.842 "rw_mbytes_per_sec": 0, 00:06:47.842 "r_mbytes_per_sec": 0, 00:06:47.842 "w_mbytes_per_sec": 0 00:06:47.842 }, 00:06:47.842 "claimed": true, 00:06:47.842 "claim_type": "exclusive_write", 00:06:47.842 "zoned": false, 00:06:47.842 "supported_io_types": { 00:06:47.842 "read": true, 00:06:47.842 "write": true, 00:06:47.842 "unmap": true, 00:06:47.842 "flush": true, 00:06:47.842 "reset": true, 00:06:47.842 "nvme_admin": false, 00:06:47.842 "nvme_io": false, 00:06:47.842 "nvme_io_md": false, 00:06:47.842 "write_zeroes": true, 00:06:47.842 "zcopy": true, 00:06:47.842 "get_zone_info": false, 00:06:47.842 "zone_management": false, 00:06:47.842 "zone_append": false, 00:06:47.842 "compare": false, 00:06:47.842 "compare_and_write": false, 00:06:47.842 "abort": true, 00:06:47.842 "seek_hole": false, 00:06:47.842 "seek_data": false, 00:06:47.842 "copy": true, 00:06:47.842 "nvme_iov_md": false 00:06:47.842 }, 00:06:47.842 "memory_domains": [ 00:06:47.842 { 00:06:47.842 "dma_device_id": "system", 00:06:47.842 "dma_device_type": 1 00:06:47.842 }, 00:06:47.842 { 00:06:47.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.842 "dma_device_type": 2 00:06:47.842 } 00:06:47.842 ], 00:06:47.842 "driver_specific": {} 00:06:47.842 }, 00:06:47.842 { 00:06:47.842 "name": "Passthru0", 00:06:47.842 "aliases": [ 00:06:47.842 "cfcbca34-3c7b-541c-94ee-2a99e12e4ff9" 00:06:47.842 ], 00:06:47.842 "product_name": "passthru", 00:06:47.842 "block_size": 512, 00:06:47.842 "num_blocks": 16384, 00:06:47.842 "uuid": 
"cfcbca34-3c7b-541c-94ee-2a99e12e4ff9", 00:06:47.842 "assigned_rate_limits": { 00:06:47.842 "rw_ios_per_sec": 0, 00:06:47.842 "rw_mbytes_per_sec": 0, 00:06:47.842 "r_mbytes_per_sec": 0, 00:06:47.842 "w_mbytes_per_sec": 0 00:06:47.842 }, 00:06:47.842 "claimed": false, 00:06:47.842 "zoned": false, 00:06:47.842 "supported_io_types": { 00:06:47.842 "read": true, 00:06:47.842 "write": true, 00:06:47.842 "unmap": true, 00:06:47.842 "flush": true, 00:06:47.842 "reset": true, 00:06:47.842 "nvme_admin": false, 00:06:47.842 "nvme_io": false, 00:06:47.842 "nvme_io_md": false, 00:06:47.842 "write_zeroes": true, 00:06:47.842 "zcopy": true, 00:06:47.842 "get_zone_info": false, 00:06:47.842 "zone_management": false, 00:06:47.842 "zone_append": false, 00:06:47.842 "compare": false, 00:06:47.842 "compare_and_write": false, 00:06:47.842 "abort": true, 00:06:47.842 "seek_hole": false, 00:06:47.842 "seek_data": false, 00:06:47.842 "copy": true, 00:06:47.842 "nvme_iov_md": false 00:06:47.842 }, 00:06:47.842 "memory_domains": [ 00:06:47.842 { 00:06:47.842 "dma_device_id": "system", 00:06:47.842 "dma_device_type": 1 00:06:47.842 }, 00:06:47.842 { 00:06:47.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.842 "dma_device_type": 2 00:06:47.842 } 00:06:47.842 ], 00:06:47.842 "driver_specific": { 00:06:47.842 "passthru": { 00:06:47.842 "name": "Passthru0", 00:06:47.842 "base_bdev_name": "Malloc2" 00:06:47.842 } 00:06:47.842 } 00:06:47.842 } 00:06:47.842 ]' 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:47.842 ************************************ 00:06:47.842 END TEST rpc_daemon_integrity 00:06:47.842 ************************************ 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:47.842 00:06:47.842 real 0m0.322s 00:06:47.842 user 0m0.220s 00:06:47.842 sys 0m0.041s 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.842 01:28:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:48.102 01:28:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:48.102 
01:28:18 rpc -- rpc/rpc.sh@84 -- # killprocess 71765 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@954 -- # '[' -z 71765 ']' 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@958 -- # kill -0 71765 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@959 -- # uname 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71765 00:06:48.102 killing process with pid 71765 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71765' 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@973 -- # kill 71765 00:06:48.102 01:28:18 rpc -- common/autotest_common.sh@978 -- # wait 71765 00:06:48.361 ************************************ 00:06:48.361 END TEST rpc 00:06:48.361 ************************************ 00:06:48.361 00:06:48.361 real 0m2.135s 00:06:48.361 user 0m2.886s 00:06:48.361 sys 0m0.553s 00:06:48.361 01:28:18 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.361 01:28:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.361 01:28:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:48.361 01:28:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.361 01:28:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.361 01:28:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.361 ************************************ 00:06:48.361 START TEST skip_rpc 00:06:48.361 ************************************ 00:06:48.361 01:28:18 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:48.361 * Looking for test storage... 00:06:48.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:48.361 01:28:18 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:48.361 01:28:18 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:48.361 01:28:18 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:48.620 01:28:19 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.620 01:28:19 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.621 01:28:19 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.621 --rc genhtml_branch_coverage=1 00:06:48.621 --rc genhtml_function_coverage=1 00:06:48.621 --rc genhtml_legend=1 00:06:48.621 --rc geninfo_all_blocks=1 00:06:48.621 --rc geninfo_unexecuted_blocks=1 00:06:48.621 00:06:48.621 ' 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.621 --rc genhtml_branch_coverage=1 00:06:48.621 --rc genhtml_function_coverage=1 00:06:48.621 --rc genhtml_legend=1 00:06:48.621 --rc geninfo_all_blocks=1 00:06:48.621 --rc geninfo_unexecuted_blocks=1 00:06:48.621 00:06:48.621 ' 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.621 --rc genhtml_branch_coverage=1 00:06:48.621 --rc genhtml_function_coverage=1 00:06:48.621 --rc genhtml_legend=1 00:06:48.621 --rc geninfo_all_blocks=1 00:06:48.621 --rc geninfo_unexecuted_blocks=1 00:06:48.621 00:06:48.621 ' 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:48.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.621 --rc genhtml_branch_coverage=1 00:06:48.621 --rc genhtml_function_coverage=1 00:06:48.621 --rc genhtml_legend=1 00:06:48.621 --rc geninfo_all_blocks=1 00:06:48.621 --rc geninfo_unexecuted_blocks=1 00:06:48.621 00:06:48.621 ' 00:06:48.621 01:28:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:48.621 01:28:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:48.621 01:28:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.621 01:28:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.621 ************************************ 00:06:48.621 START TEST skip_rpc 00:06:48.621 ************************************ 00:06:48.621 01:28:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:48.621 01:28:19 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71958 00:06:48.621 01:28:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:48.621 01:28:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:48.621 01:28:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:48.621 [2024-12-16 01:28:19.126834] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:48.621 [2024-12-16 01:28:19.127108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71958 ] 00:06:48.621 [2024-12-16 01:28:19.274444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.880 [2024-12-16 01:28:19.293654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.880 [2024-12-16 01:28:19.327980] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:54.153 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71958 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71958 ']' 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71958 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71958 00:06:54.154 killing process with pid 71958 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 71958' 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71958 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71958 00:06:54.154 00:06:54.154 real 0m5.269s 00:06:54.154 user 0m5.004s 00:06:54.154 sys 0m0.184s 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.154 01:28:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.154 ************************************ 00:06:54.154 END TEST skip_rpc 00:06:54.154 ************************************ 00:06:54.154 01:28:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:54.154 01:28:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.154 01:28:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.154 01:28:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.154 ************************************ 00:06:54.154 START TEST skip_rpc_with_json 00:06:54.154 ************************************ 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72039 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 72039 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 72039 ']' 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.154 01:28:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:54.154 [2024-12-16 01:28:24.452366] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:54.154 [2024-12-16 01:28:24.452731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72039 ] 00:06:54.154 [2024-12-16 01:28:24.591462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.154 [2024-12-16 01:28:24.610864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.154 [2024-12-16 01:28:24.646322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.091 [2024-12-16 01:28:25.414836] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:55.091 request: 00:06:55.091 { 00:06:55.091 "trtype": "tcp", 00:06:55.091 "method": "nvmf_get_transports", 00:06:55.091 "req_id": 1 00:06:55.091 } 00:06:55.091 Got JSON-RPC error response 00:06:55.091 response: 00:06:55.091 { 00:06:55.091 "code": -19, 00:06:55.091 "message": "No such device" 00:06:55.091 } 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.091 [2024-12-16 01:28:25.426952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.091 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:55.091 { 00:06:55.091 "subsystems": [ 00:06:55.091 { 00:06:55.091 "subsystem": "fsdev", 00:06:55.091 "config": [ 00:06:55.091 { 00:06:55.091 "method": "fsdev_set_opts", 00:06:55.091 "params": { 00:06:55.091 "fsdev_io_pool_size": 65535, 00:06:55.091 "fsdev_io_cache_size": 256 00:06:55.091 } 00:06:55.091 } 00:06:55.091 ] 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "vfio_user_target", 00:06:55.091 "config": null 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "keyring", 00:06:55.091 "config": [] 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "iobuf", 00:06:55.091 "config": [ 00:06:55.091 { 00:06:55.091 "method": "iobuf_set_options", 00:06:55.091 "params": { 00:06:55.091 "small_pool_count": 8192, 00:06:55.091 "large_pool_count": 1024, 00:06:55.091 
"small_bufsize": 8192, 00:06:55.091 "large_bufsize": 135168, 00:06:55.091 "enable_numa": false 00:06:55.091 } 00:06:55.091 } 00:06:55.091 ] 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "sock", 00:06:55.091 "config": [ 00:06:55.091 { 00:06:55.091 "method": "sock_set_default_impl", 00:06:55.091 "params": { 00:06:55.091 "impl_name": "uring" 00:06:55.091 } 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "method": "sock_impl_set_options", 00:06:55.091 "params": { 00:06:55.091 "impl_name": "ssl", 00:06:55.091 "recv_buf_size": 4096, 00:06:55.091 "send_buf_size": 4096, 00:06:55.091 "enable_recv_pipe": true, 00:06:55.091 "enable_quickack": false, 00:06:55.091 "enable_placement_id": 0, 00:06:55.091 "enable_zerocopy_send_server": true, 00:06:55.091 "enable_zerocopy_send_client": false, 00:06:55.091 "zerocopy_threshold": 0, 00:06:55.091 "tls_version": 0, 00:06:55.091 "enable_ktls": false 00:06:55.091 } 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "method": "sock_impl_set_options", 00:06:55.091 "params": { 00:06:55.091 "impl_name": "posix", 00:06:55.091 "recv_buf_size": 2097152, 00:06:55.091 "send_buf_size": 2097152, 00:06:55.091 "enable_recv_pipe": true, 00:06:55.091 "enable_quickack": false, 00:06:55.091 "enable_placement_id": 0, 00:06:55.091 "enable_zerocopy_send_server": true, 00:06:55.091 "enable_zerocopy_send_client": false, 00:06:55.091 "zerocopy_threshold": 0, 00:06:55.091 "tls_version": 0, 00:06:55.091 "enable_ktls": false 00:06:55.091 } 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "method": "sock_impl_set_options", 00:06:55.091 "params": { 00:06:55.091 "impl_name": "uring", 00:06:55.091 "recv_buf_size": 2097152, 00:06:55.091 "send_buf_size": 2097152, 00:06:55.091 "enable_recv_pipe": true, 00:06:55.091 "enable_quickack": false, 00:06:55.091 "enable_placement_id": 0, 00:06:55.091 "enable_zerocopy_send_server": false, 00:06:55.091 "enable_zerocopy_send_client": false, 00:06:55.091 "zerocopy_threshold": 0, 00:06:55.091 "tls_version": 0, 00:06:55.091 "enable_ktls": false 00:06:55.091 } 00:06:55.091 } 00:06:55.091 ] 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "vmd", 00:06:55.091 "config": [] 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "accel", 00:06:55.091 "config": [ 00:06:55.091 { 00:06:55.091 "method": "accel_set_options", 00:06:55.091 "params": { 00:06:55.091 "small_cache_size": 128, 00:06:55.091 "large_cache_size": 16, 00:06:55.091 "task_count": 2048, 00:06:55.091 "sequence_count": 2048, 00:06:55.091 "buf_count": 2048 00:06:55.091 } 00:06:55.091 } 00:06:55.091 ] 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "subsystem": "bdev", 00:06:55.091 "config": [ 00:06:55.091 { 00:06:55.091 "method": "bdev_set_options", 00:06:55.091 "params": { 00:06:55.091 "bdev_io_pool_size": 65535, 00:06:55.091 "bdev_io_cache_size": 256, 00:06:55.091 "bdev_auto_examine": true, 00:06:55.091 "iobuf_small_cache_size": 128, 00:06:55.091 "iobuf_large_cache_size": 16 00:06:55.091 } 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "method": "bdev_raid_set_options", 00:06:55.091 "params": { 00:06:55.091 "process_window_size_kb": 1024, 00:06:55.091 "process_max_bandwidth_mb_sec": 0 00:06:55.091 } 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "method": "bdev_iscsi_set_options", 00:06:55.091 "params": { 00:06:55.091 "timeout_sec": 30 00:06:55.091 } 00:06:55.091 }, 00:06:55.091 { 00:06:55.091 "method": "bdev_nvme_set_options", 00:06:55.091 "params": { 00:06:55.091 "action_on_timeout": "none", 00:06:55.092 "timeout_us": 0, 00:06:55.092 "timeout_admin_us": 0, 00:06:55.092 "keep_alive_timeout_ms": 10000, 
00:06:55.092 "arbitration_burst": 0, 00:06:55.092 "low_priority_weight": 0, 00:06:55.092 "medium_priority_weight": 0, 00:06:55.092 "high_priority_weight": 0, 00:06:55.092 "nvme_adminq_poll_period_us": 10000, 00:06:55.092 "nvme_ioq_poll_period_us": 0, 00:06:55.092 "io_queue_requests": 0, 00:06:55.092 "delay_cmd_submit": true, 00:06:55.092 "transport_retry_count": 4, 00:06:55.092 "bdev_retry_count": 3, 00:06:55.092 "transport_ack_timeout": 0, 00:06:55.092 "ctrlr_loss_timeout_sec": 0, 00:06:55.092 "reconnect_delay_sec": 0, 00:06:55.092 "fast_io_fail_timeout_sec": 0, 00:06:55.092 "disable_auto_failback": false, 00:06:55.092 "generate_uuids": false, 00:06:55.092 "transport_tos": 0, 00:06:55.092 "nvme_error_stat": false, 00:06:55.092 "rdma_srq_size": 0, 00:06:55.092 "io_path_stat": false, 00:06:55.092 "allow_accel_sequence": false, 00:06:55.092 "rdma_max_cq_size": 0, 00:06:55.092 "rdma_cm_event_timeout_ms": 0, 00:06:55.092 "dhchap_digests": [ 00:06:55.092 "sha256", 00:06:55.092 "sha384", 00:06:55.092 "sha512" 00:06:55.092 ], 00:06:55.092 "dhchap_dhgroups": [ 00:06:55.092 "null", 00:06:55.092 "ffdhe2048", 00:06:55.092 "ffdhe3072", 00:06:55.092 "ffdhe4096", 00:06:55.092 "ffdhe6144", 00:06:55.092 "ffdhe8192" 00:06:55.092 ], 00:06:55.092 "rdma_umr_per_io": false 00:06:55.092 } 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "method": "bdev_nvme_set_hotplug", 00:06:55.092 "params": { 00:06:55.092 "period_us": 100000, 00:06:55.092 "enable": false 00:06:55.092 } 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "method": "bdev_wait_for_examine" 00:06:55.092 } 00:06:55.092 ] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "scsi", 00:06:55.092 "config": null 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "scheduler", 00:06:55.092 "config": [ 00:06:55.092 { 00:06:55.092 "method": "framework_set_scheduler", 00:06:55.092 "params": { 00:06:55.092 "name": "static" 00:06:55.092 } 00:06:55.092 } 00:06:55.092 ] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "vhost_scsi", 00:06:55.092 "config": [] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "vhost_blk", 00:06:55.092 "config": [] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "ublk", 00:06:55.092 "config": [] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "nbd", 00:06:55.092 "config": [] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "nvmf", 00:06:55.092 "config": [ 00:06:55.092 { 00:06:55.092 "method": "nvmf_set_config", 00:06:55.092 "params": { 00:06:55.092 "discovery_filter": "match_any", 00:06:55.092 "admin_cmd_passthru": { 00:06:55.092 "identify_ctrlr": false 00:06:55.092 }, 00:06:55.092 "dhchap_digests": [ 00:06:55.092 "sha256", 00:06:55.092 "sha384", 00:06:55.092 "sha512" 00:06:55.092 ], 00:06:55.092 "dhchap_dhgroups": [ 00:06:55.092 "null", 00:06:55.092 "ffdhe2048", 00:06:55.092 "ffdhe3072", 00:06:55.092 "ffdhe4096", 00:06:55.092 "ffdhe6144", 00:06:55.092 "ffdhe8192" 00:06:55.092 ] 00:06:55.092 } 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "method": "nvmf_set_max_subsystems", 00:06:55.092 "params": { 00:06:55.092 "max_subsystems": 1024 00:06:55.092 } 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "method": "nvmf_set_crdt", 00:06:55.092 "params": { 00:06:55.092 "crdt1": 0, 00:06:55.092 "crdt2": 0, 00:06:55.092 "crdt3": 0 00:06:55.092 } 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "method": "nvmf_create_transport", 00:06:55.092 "params": { 00:06:55.092 "trtype": "TCP", 00:06:55.092 "max_queue_depth": 128, 00:06:55.092 "max_io_qpairs_per_ctrlr": 127, 00:06:55.092 "in_capsule_data_size": 4096, 
00:06:55.092 "max_io_size": 131072, 00:06:55.092 "io_unit_size": 131072, 00:06:55.092 "max_aq_depth": 128, 00:06:55.092 "num_shared_buffers": 511, 00:06:55.092 "buf_cache_size": 4294967295, 00:06:55.092 "dif_insert_or_strip": false, 00:06:55.092 "zcopy": false, 00:06:55.092 "c2h_success": true, 00:06:55.092 "sock_priority": 0, 00:06:55.092 "abort_timeout_sec": 1, 00:06:55.092 "ack_timeout": 0, 00:06:55.092 "data_wr_pool_size": 0 00:06:55.092 } 00:06:55.092 } 00:06:55.092 ] 00:06:55.092 }, 00:06:55.092 { 00:06:55.092 "subsystem": "iscsi", 00:06:55.092 "config": [ 00:06:55.092 { 00:06:55.092 "method": "iscsi_set_options", 00:06:55.092 "params": { 00:06:55.092 "node_base": "iqn.2016-06.io.spdk", 00:06:55.092 "max_sessions": 128, 00:06:55.092 "max_connections_per_session": 2, 00:06:55.092 "max_queue_depth": 64, 00:06:55.092 "default_time2wait": 2, 00:06:55.092 "default_time2retain": 20, 00:06:55.092 "first_burst_length": 8192, 00:06:55.092 "immediate_data": true, 00:06:55.092 "allow_duplicated_isid": false, 00:06:55.092 "error_recovery_level": 0, 00:06:55.092 "nop_timeout": 60, 00:06:55.092 "nop_in_interval": 30, 00:06:55.092 "disable_chap": false, 00:06:55.092 "require_chap": false, 00:06:55.092 "mutual_chap": false, 00:06:55.092 "chap_group": 0, 00:06:55.092 "max_large_datain_per_connection": 64, 00:06:55.092 "max_r2t_per_connection": 4, 00:06:55.092 "pdu_pool_size": 36864, 00:06:55.092 "immediate_data_pool_size": 16384, 00:06:55.092 "data_out_pool_size": 2048 00:06:55.092 } 00:06:55.092 } 00:06:55.092 ] 00:06:55.092 } 00:06:55.092 ] 00:06:55.092 } 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 72039 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 72039 ']' 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 72039 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72039 00:06:55.092 killing process with pid 72039 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72039' 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 72039 00:06:55.092 01:28:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 72039 00:06:55.351 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72067 00:06:55.351 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:55.351 01:28:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 72067 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 72067 ']' 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@958 -- # kill -0 72067 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72067 00:07:00.624 killing process with pid 72067 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72067' 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 72067 00:07:00.624 01:28:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 72067 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:00.624 00:07:00.624 real 0m6.743s 00:07:00.624 user 0m6.705s 00:07:00.624 sys 0m0.433s 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:00.624 ************************************ 00:07:00.624 END TEST skip_rpc_with_json 00:07:00.624 ************************************ 00:07:00.624 01:28:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:00.624 01:28:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.624 01:28:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.624 01:28:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.624 ************************************ 00:07:00.624 START TEST skip_rpc_with_delay 00:07:00.624 ************************************ 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.624 
01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:00.624 [2024-12-16 01:28:31.252233] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.624 00:07:00.624 real 0m0.097s 00:07:00.624 user 0m0.064s 00:07:00.624 sys 0m0.031s 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.624 01:28:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:00.624 ************************************ 00:07:00.624 END TEST skip_rpc_with_delay 00:07:00.624 ************************************ 00:07:00.901 01:28:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:00.901 01:28:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:00.901 01:28:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:00.901 01:28:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.901 01:28:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.901 01:28:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.901 ************************************ 00:07:00.901 START TEST exit_on_failed_rpc_init 00:07:00.901 ************************************ 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72176 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 72176 00:07:00.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 72176 ']' 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.901 01:28:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:00.901 [2024-12-16 01:28:31.402454] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:00.901 [2024-12-16 01:28:31.402789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72176 ] 00:07:01.172 [2024-12-16 01:28:31.546327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.173 [2024-12-16 01:28:31.568653] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.173 [2024-12-16 01:28:31.604691] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.740 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:01.741 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:02.000 [2024-12-16 01:28:32.442950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:02.000 [2024-12-16 01:28:32.443042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72194 ] 00:07:02.000 [2024-12-16 01:28:32.595074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.000 [2024-12-16 01:28:32.618788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.000 [2024-12-16 01:28:32.618901] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:02.000 [2024-12-16 01:28:32.618919] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:02.000 [2024-12-16 01:28:32.618930] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 72176 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 72176 ']' 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 72176 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72176 00:07:02.259 killing process with pid 72176 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72176' 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 72176 00:07:02.259 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 72176 00:07:02.518 ************************************ 00:07:02.518 END TEST exit_on_failed_rpc_init 00:07:02.518 ************************************ 00:07:02.518 00:07:02.518 real 0m1.589s 00:07:02.518 user 0m1.915s 00:07:02.518 sys 0m0.312s 00:07:02.518 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.518 01:28:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:02.518 01:28:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:02.518 00:07:02.518 real 0m14.114s 00:07:02.518 user 0m13.879s 00:07:02.518 sys 0m1.165s 00:07:02.518 01:28:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.518 ************************************ 00:07:02.518 END TEST skip_rpc 00:07:02.518 ************************************ 00:07:02.518 01:28:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.518 01:28:33 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:02.518 01:28:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.518 01:28:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.518 01:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.518 
************************************ 00:07:02.518 START TEST rpc_client 00:07:02.518 ************************************ 00:07:02.518 01:28:33 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:02.518 * Looking for test storage... 00:07:02.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:02.518 01:28:33 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:02.518 01:28:33 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:02.518 01:28:33 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:02.518 01:28:33 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.518 01:28:33 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.778 01:28:33 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:02.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.778 --rc genhtml_branch_coverage=1 00:07:02.778 --rc genhtml_function_coverage=1 00:07:02.778 --rc genhtml_legend=1 00:07:02.778 --rc geninfo_all_blocks=1 00:07:02.778 --rc geninfo_unexecuted_blocks=1 00:07:02.778 00:07:02.778 ' 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:02.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.778 --rc genhtml_branch_coverage=1 00:07:02.778 --rc genhtml_function_coverage=1 00:07:02.778 --rc genhtml_legend=1 00:07:02.778 --rc geninfo_all_blocks=1 00:07:02.778 --rc geninfo_unexecuted_blocks=1 00:07:02.778 00:07:02.778 ' 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:02.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.778 --rc genhtml_branch_coverage=1 00:07:02.778 --rc genhtml_function_coverage=1 00:07:02.778 --rc genhtml_legend=1 00:07:02.778 --rc geninfo_all_blocks=1 00:07:02.778 --rc geninfo_unexecuted_blocks=1 00:07:02.778 00:07:02.778 ' 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:02.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.778 --rc genhtml_branch_coverage=1 00:07:02.778 --rc genhtml_function_coverage=1 00:07:02.778 --rc genhtml_legend=1 00:07:02.778 --rc geninfo_all_blocks=1 00:07:02.778 --rc geninfo_unexecuted_blocks=1 00:07:02.778 00:07:02.778 ' 00:07:02.778 01:28:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:02.778 OK 00:07:02.778 01:28:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:02.778 ************************************ 00:07:02.778 END TEST rpc_client 00:07:02.778 ************************************ 00:07:02.778 00:07:02.778 real 0m0.191s 00:07:02.778 user 0m0.119s 00:07:02.778 sys 0m0.080s 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.778 01:28:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:02.778 01:28:33 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:02.778 01:28:33 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.778 01:28:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.778 01:28:33 -- common/autotest_common.sh@10 -- # set +x 00:07:02.778 ************************************ 00:07:02.778 START TEST json_config 00:07:02.778 ************************************ 00:07:02.778 01:28:33 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:02.778 01:28:33 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:02.778 01:28:33 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:02.778 01:28:33 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:02.778 01:28:33 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:02.778 01:28:33 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.778 01:28:33 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.778 01:28:33 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.778 01:28:33 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.778 01:28:33 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.778 01:28:33 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.779 01:28:33 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.779 01:28:33 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.779 01:28:33 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.779 01:28:33 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.779 01:28:33 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.779 01:28:33 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:02.779 01:28:33 json_config -- scripts/common.sh@345 -- # : 1 00:07:02.779 01:28:33 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.779 01:28:33 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.779 01:28:33 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:02.779 01:28:33 json_config -- scripts/common.sh@353 -- # local d=1 00:07:02.779 01:28:33 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.779 01:28:33 json_config -- scripts/common.sh@355 -- # echo 1 00:07:02.779 01:28:33 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.779 01:28:33 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:02.779 01:28:33 json_config -- scripts/common.sh@353 -- # local d=2 00:07:02.779 01:28:33 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.779 01:28:33 json_config -- scripts/common.sh@355 -- # echo 2 00:07:02.779 01:28:33 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.779 01:28:33 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.779 01:28:33 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.779 01:28:33 json_config -- scripts/common.sh@368 -- # return 0 00:07:02.779 01:28:33 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.779 01:28:33 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 01:28:33 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 01:28:33 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 01:28:33 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:02.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.779 --rc genhtml_branch_coverage=1 00:07:02.779 --rc genhtml_function_coverage=1 00:07:02.779 --rc genhtml_legend=1 00:07:02.779 --rc geninfo_all_blocks=1 00:07:02.779 --rc geninfo_unexecuted_blocks=1 00:07:02.779 00:07:02.779 ' 00:07:02.779 01:28:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.779 01:28:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.779 01:28:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:03.039 01:28:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.039 01:28:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.039 01:28:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.039 01:28:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.039 01:28:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.039 01:28:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.039 01:28:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.039 01:28:33 json_config -- paths/export.sh@5 -- # export PATH 00:07:03.039 01:28:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@51 -- # : 0 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.039 01:28:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.039 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.039 01:28:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:03.039 INFO: JSON configuration test init 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.039 01:28:33 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:03.039 01:28:33 json_config -- json_config/common.sh@9 -- # local app=target 00:07:03.039 01:28:33 json_config -- json_config/common.sh@10 -- # shift 
00:07:03.039 01:28:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:03.039 01:28:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:03.039 01:28:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:03.039 01:28:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.039 01:28:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:03.039 01:28:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72328 00:07:03.039 01:28:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:03.039 Waiting for target to run... 00:07:03.039 01:28:33 json_config -- json_config/common.sh@25 -- # waitforlisten 72328 /var/tmp/spdk_tgt.sock 00:07:03.039 01:28:33 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@835 -- # '[' -z 72328 ']' 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.039 01:28:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:03.039 [2024-12-16 01:28:33.521797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:03.039 [2024-12-16 01:28:33.522080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72328 ] 00:07:03.298 [2024-12-16 01:28:33.803728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.298 [2024-12-16 01:28:33.815736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.235 00:07:04.235 01:28:34 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.235 01:28:34 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:04.235 01:28:34 json_config -- json_config/common.sh@26 -- # echo '' 00:07:04.235 01:28:34 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:04.235 01:28:34 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:04.236 01:28:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.236 01:28:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.236 01:28:34 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:04.236 01:28:34 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:04.236 01:28:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.236 01:28:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.236 01:28:34 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:04.236 01:28:34 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:04.236 01:28:34 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:04.236 [2024-12-16 01:28:34.881967] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:04.495 01:28:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.495 01:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:04.495 01:28:35 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:04.495 01:28:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@54 -- # sort 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:04.754 01:28:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.754 01:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:04.754 01:28:35 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.754 01:28:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:04.754 01:28:35 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:04.754 01:28:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:04.754 01:28:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:05.013 MallocForNvmf0 00:07:05.013 01:28:35 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:05.013 01:28:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:05.272 MallocForNvmf1 00:07:05.272 01:28:35 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:05.272 01:28:35 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:05.531 [2024-12-16 01:28:36.176990] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.790 01:28:36 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:05.790 01:28:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:06.049 01:28:36 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:06.049 01:28:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:06.049 01:28:36 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:06.049 01:28:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:06.309 01:28:36 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:06.309 01:28:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:06.568 [2024-12-16 01:28:37.137560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:06.568 01:28:37 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:06.568 01:28:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.568 01:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.568 01:28:37 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:06.568 01:28:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.568 01:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:06.827 01:28:37 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:07:06.827 01:28:37 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:06.827 01:28:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:06.827 MallocBdevForConfigChangeCheck 00:07:06.827 01:28:37 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:06.827 01:28:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:06.827 01:28:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:07.086 01:28:37 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:07.086 01:28:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:07.345 INFO: shutting down applications... 00:07:07.345 01:28:37 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:07.345 01:28:37 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:07.345 01:28:37 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:07.345 01:28:37 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:07.345 01:28:37 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:07.605 Calling clear_iscsi_subsystem 00:07:07.605 Calling clear_nvmf_subsystem 00:07:07.605 Calling clear_nbd_subsystem 00:07:07.605 Calling clear_ublk_subsystem 00:07:07.605 Calling clear_vhost_blk_subsystem 00:07:07.605 Calling clear_vhost_scsi_subsystem 00:07:07.605 Calling clear_bdev_subsystem 00:07:07.605 01:28:38 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:07.605 01:28:38 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:07.605 01:28:38 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:07.605 01:28:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:07.605 01:28:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:07.605 01:28:38 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:08.173 01:28:38 json_config -- json_config/json_config.sh@352 -- # break 00:07:08.173 01:28:38 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:08.173 01:28:38 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:08.173 01:28:38 json_config -- json_config/common.sh@31 -- # local app=target 00:07:08.173 01:28:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:08.173 01:28:38 json_config -- json_config/common.sh@35 -- # [[ -n 72328 ]] 00:07:08.173 01:28:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 72328 00:07:08.173 01:28:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:08.173 01:28:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.173 01:28:38 json_config -- json_config/common.sh@41 -- # kill -0 72328 00:07:08.173 01:28:38 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:07:08.741 01:28:39 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:08.741 SPDK target shutdown done 00:07:08.741 INFO: relaunching applications... 00:07:08.742 01:28:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:08.742 01:28:39 json_config -- json_config/common.sh@41 -- # kill -0 72328 00:07:08.742 01:28:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:08.742 01:28:39 json_config -- json_config/common.sh@43 -- # break 00:07:08.742 01:28:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:08.742 01:28:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:08.742 01:28:39 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:08.742 01:28:39 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:08.742 01:28:39 json_config -- json_config/common.sh@9 -- # local app=target 00:07:08.742 01:28:39 json_config -- json_config/common.sh@10 -- # shift 00:07:08.742 01:28:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:08.742 01:28:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:08.742 01:28:39 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:08.742 01:28:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.742 01:28:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:08.742 01:28:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=72524 00:07:08.742 01:28:39 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:08.742 Waiting for target to run... 00:07:08.742 01:28:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:08.742 01:28:39 json_config -- json_config/common.sh@25 -- # waitforlisten 72524 /var/tmp/spdk_tgt.sock 00:07:08.742 01:28:39 json_config -- common/autotest_common.sh@835 -- # '[' -z 72524 ']' 00:07:08.742 01:28:39 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:08.742 01:28:39 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.742 01:28:39 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:08.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:08.742 01:28:39 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.742 01:28:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:08.742 [2024-12-16 01:28:39.196568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
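Just above, json_config_test_shutdown_app sends SIGINT to the target (pid 72328) and polls it for up to 30 half-second intervals before relaunching with the saved spdk_tgt_config.json. A minimal sketch of that shutdown loop; the pid argument is a placeholder.

# Sketch of the SIGINT-then-poll shutdown traced above.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only tests whether the process still exists.
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid did not exit in time" >&2
    return 1
}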
00:07:08.742 [2024-12-16 01:28:39.196884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72524 ] 00:07:09.001 [2024-12-16 01:28:39.491147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.001 [2024-12-16 01:28:39.503059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.001 [2024-12-16 01:28:39.630710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.261 [2024-12-16 01:28:39.819448] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.261 [2024-12-16 01:28:39.851491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:09.520 00:07:09.520 INFO: Checking if target configuration is the same... 00:07:09.520 01:28:40 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.520 01:28:40 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:09.520 01:28:40 json_config -- json_config/common.sh@26 -- # echo '' 00:07:09.520 01:28:40 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:09.520 01:28:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:09.520 01:28:40 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:09.520 01:28:40 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:09.520 01:28:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:09.520 + '[' 2 -ne 2 ']' 00:07:09.520 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:09.520 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:09.520 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:09.520 +++ basename /dev/fd/62 00:07:09.520 ++ mktemp /tmp/62.XXX 00:07:09.520 + tmp_file_1=/tmp/62.4J2 00:07:09.520 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:09.520 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:09.520 + tmp_file_2=/tmp/spdk_tgt_config.json.Bam 00:07:09.520 + ret=0 00:07:09.520 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:10.088 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:10.088 + diff -u /tmp/62.4J2 /tmp/spdk_tgt_config.json.Bam 00:07:10.088 INFO: JSON config files are the same 00:07:10.088 + echo 'INFO: JSON config files are the same' 00:07:10.088 + rm /tmp/62.4J2 /tmp/spdk_tgt_config.json.Bam 00:07:10.088 + exit 0 00:07:10.088 INFO: changing configuration and checking if this can be detected... 00:07:10.088 01:28:40 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:10.088 01:28:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
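The 'Checking if target configuration is the same...' block above compares the running target's save_config output against the spdk_tgt_config.json it was relaunched with: json_diff.sh writes both into mktemp files, normalizes each with config_filter.py -method sort, and diffs them. A simplified sketch of that comparison, assuming (as the trace suggests) that config_filter.py reads stdin and writes the normalized JSON to stdout.

# Simplified version of the json_diff.sh flow traced above.
rootdir=/home/vagrant/spdk_repo/spdk
live_cfg=$(mktemp /tmp/live.XXX)
file_cfg=$(mktemp /tmp/file.XXX)
# Normalize both sides so key/entry ordering cannot produce spurious diffs.
"$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$rootdir/test/json_config/config_filter.py" -method sort > "$live_cfg"
"$rootdir/test/json_config/config_filter.py" -method sort \
    < "$rootdir/spdk_tgt_config.json" > "$file_cfg"
if diff -u "$live_cfg" "$file_cfg"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configurations differ'
fi
rm "$live_cfg" "$file_cfg"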
00:07:10.088 01:28:40 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:10.088 01:28:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:10.347 01:28:40 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:10.348 01:28:40 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:10.348 01:28:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:10.348 + '[' 2 -ne 2 ']' 00:07:10.348 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:10.348 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:10.348 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:10.348 +++ basename /dev/fd/62 00:07:10.348 ++ mktemp /tmp/62.XXX 00:07:10.348 + tmp_file_1=/tmp/62.SZN 00:07:10.348 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:10.348 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:10.348 + tmp_file_2=/tmp/spdk_tgt_config.json.JHi 00:07:10.348 + ret=0 00:07:10.348 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:10.917 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:10.917 + diff -u /tmp/62.SZN /tmp/spdk_tgt_config.json.JHi 00:07:10.917 + ret=1 00:07:10.917 + echo '=== Start of file: /tmp/62.SZN ===' 00:07:10.917 + cat /tmp/62.SZN 00:07:10.917 + echo '=== End of file: /tmp/62.SZN ===' 00:07:10.917 + echo '' 00:07:10.917 + echo '=== Start of file: /tmp/spdk_tgt_config.json.JHi ===' 00:07:10.917 + cat /tmp/spdk_tgt_config.json.JHi 00:07:10.917 + echo '=== End of file: /tmp/spdk_tgt_config.json.JHi ===' 00:07:10.917 + echo '' 00:07:10.917 + rm /tmp/62.SZN /tmp/spdk_tgt_config.json.JHi 00:07:10.917 + exit 1 00:07:10.917 INFO: configuration change detected. 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
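The change-detection step above relies on MallocBdevForConfigChangeCheck, a small malloc bdev created during init purely as a sentinel: deleting it from the live target makes the next live-vs-file comparison fail (ret=1), which the test reports as 'configuration change detected.' A compact sketch of that idea, reusing the same helper paths as the previous sketch.

# Sentinel-bdev change detection as traced above (sketch, not the exact script).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
saved=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
# The saved file still contains the sentinel; the live target no longer will,
# so the normalized configs must now differ.
$rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! diff -u <($rpc save_config | $filter -method sort) \
             <($filter -method sort < "$saved") > /dev/null; then
    echo 'INFO: configuration change detected.'
fi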
00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@324 -- # [[ -n 72524 ]] 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.917 01:28:41 json_config -- json_config/json_config.sh@330 -- # killprocess 72524 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@954 -- # '[' -z 72524 ']' 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@958 -- # kill -0 72524 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@959 -- # uname 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72524 00:07:10.917 killing process with pid 72524 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72524' 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@973 -- # kill 72524 00:07:10.917 01:28:41 json_config -- common/autotest_common.sh@978 -- # wait 72524 00:07:11.177 01:28:41 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:11.177 01:28:41 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:11.177 01:28:41 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:11.177 01:28:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.177 INFO: Success 00:07:11.177 01:28:41 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:11.177 01:28:41 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:11.177 00:07:11.177 real 0m8.446s 00:07:11.177 user 0m12.288s 00:07:11.177 sys 0m1.434s 00:07:11.177 
************************************ 00:07:11.177 END TEST json_config 00:07:11.177 ************************************ 00:07:11.177 01:28:41 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.177 01:28:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:11.177 01:28:41 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:11.177 01:28:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.177 01:28:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.177 01:28:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.177 ************************************ 00:07:11.177 START TEST json_config_extra_key 00:07:11.177 ************************************ 00:07:11.177 01:28:41 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:11.177 01:28:41 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.177 01:28:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.177 01:28:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.437 01:28:41 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:11.437 01:28:41 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.438 --rc genhtml_branch_coverage=1 00:07:11.438 --rc genhtml_function_coverage=1 00:07:11.438 --rc genhtml_legend=1 00:07:11.438 --rc geninfo_all_blocks=1 00:07:11.438 --rc geninfo_unexecuted_blocks=1 00:07:11.438 00:07:11.438 ' 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.438 --rc genhtml_branch_coverage=1 00:07:11.438 --rc genhtml_function_coverage=1 00:07:11.438 --rc genhtml_legend=1 00:07:11.438 --rc geninfo_all_blocks=1 00:07:11.438 --rc geninfo_unexecuted_blocks=1 00:07:11.438 00:07:11.438 ' 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.438 --rc genhtml_branch_coverage=1 00:07:11.438 --rc genhtml_function_coverage=1 00:07:11.438 --rc genhtml_legend=1 00:07:11.438 --rc geninfo_all_blocks=1 00:07:11.438 --rc geninfo_unexecuted_blocks=1 00:07:11.438 00:07:11.438 ' 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.438 --rc genhtml_branch_coverage=1 00:07:11.438 --rc genhtml_function_coverage=1 00:07:11.438 --rc genhtml_legend=1 00:07:11.438 --rc geninfo_all_blocks=1 00:07:11.438 --rc geninfo_unexecuted_blocks=1 00:07:11.438 00:07:11.438 ' 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.438 01:28:41 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.438 01:28:41 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.438 01:28:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.438 01:28:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.438 01:28:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.438 01:28:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:11.438 01:28:41 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.438 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.438 01:28:41 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:11.438 INFO: launching applications... 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
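Earlier in this trace, scripts/common.sh decides whether the installed lcov is older than 2 by splitting both version strings on '.', '-' and ':' and comparing the fields numerically. The helper below is a self-contained approximation of that comparison, not the actual cmp_versions implementation; it assumes purely numeric fields.

# Approximation of the version comparison traced above: succeeds when $1 < $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v fields=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < fields; v++)); do
        # Missing fields compare as 0 (e.g. "2" vs "2.0").
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 < d2 )) && return 0
        (( d1 > d2 )) && return 1
    done
    return 1   # equal, therefore not strictly less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x, keep the legacy LCOV_OPTS"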
00:07:11.438 01:28:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=72672 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:11.438 Waiting for target to run... 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:11.438 01:28:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 72672 /var/tmp/spdk_tgt.sock 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 72672 ']' 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:11.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.438 01:28:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:11.438 [2024-12-16 01:28:42.029660] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:11.438 [2024-12-16 01:28:42.029985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72672 ] 00:07:11.698 [2024-12-16 01:28:42.338107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.698 [2024-12-16 01:28:42.352001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.957 [2024-12-16 01:28:42.374512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.525 00:07:12.525 INFO: shutting down applications... 00:07:12.525 01:28:43 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.525 01:28:43 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:12.525 01:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
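Unlike the json_config test, json_config_extra_key launches the target directly with a pre-built configuration file (test/json_config/extra_key.json) and no --wait-for-rpc. That file's contents are not shown in this log; purely as a hypothetical illustration, a file accepted by spdk_tgt --json has the same shape that save_config emits, roughly like this:

# Hypothetical example of a spdk_tgt --json configuration file; the real
# extra_key.json used above is not reproduced in this log.
cat > /tmp/example_config.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 2048, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/example_config.json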
00:07:12.525 01:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 72672 ]] 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 72672 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72672 00:07:12.525 01:28:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 72672 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:13.093 01:28:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:13.093 SPDK target shutdown done 00:07:13.093 01:28:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:13.093 Success 00:07:13.093 00:07:13.093 real 0m1.811s 00:07:13.093 user 0m1.688s 00:07:13.093 sys 0m0.320s 00:07:13.093 01:28:43 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.093 01:28:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:13.093 ************************************ 00:07:13.093 END TEST json_config_extra_key 00:07:13.093 ************************************ 00:07:13.093 01:28:43 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:13.093 01:28:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:13.093 01:28:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.093 01:28:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.093 ************************************ 00:07:13.093 START TEST alias_rpc 00:07:13.093 ************************************ 00:07:13.093 01:28:43 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:13.093 * Looking for test storage... 
00:07:13.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:13.093 01:28:43 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:13.093 01:28:43 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:13.093 01:28:43 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:13.352 01:28:43 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:13.352 01:28:43 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:13.352 01:28:43 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.352 01:28:43 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:13.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.352 --rc genhtml_branch_coverage=1 00:07:13.352 --rc genhtml_function_coverage=1 00:07:13.353 --rc genhtml_legend=1 00:07:13.353 --rc geninfo_all_blocks=1 00:07:13.353 --rc geninfo_unexecuted_blocks=1 00:07:13.353 00:07:13.353 ' 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.353 --rc genhtml_branch_coverage=1 00:07:13.353 --rc genhtml_function_coverage=1 00:07:13.353 --rc genhtml_legend=1 00:07:13.353 --rc geninfo_all_blocks=1 00:07:13.353 --rc geninfo_unexecuted_blocks=1 00:07:13.353 00:07:13.353 ' 00:07:13.353 01:28:43 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.353 --rc genhtml_branch_coverage=1 00:07:13.353 --rc genhtml_function_coverage=1 00:07:13.353 --rc genhtml_legend=1 00:07:13.353 --rc geninfo_all_blocks=1 00:07:13.353 --rc geninfo_unexecuted_blocks=1 00:07:13.353 00:07:13.353 ' 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:13.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.353 --rc genhtml_branch_coverage=1 00:07:13.353 --rc genhtml_function_coverage=1 00:07:13.353 --rc genhtml_legend=1 00:07:13.353 --rc geninfo_all_blocks=1 00:07:13.353 --rc geninfo_unexecuted_blocks=1 00:07:13.353 00:07:13.353 ' 00:07:13.353 01:28:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.353 01:28:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=72750 00:07:13.353 01:28:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:13.353 01:28:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 72750 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 72750 ']' 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.353 01:28:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.353 [2024-12-16 01:28:43.897131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
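alias_rpc.sh, traced just above, starts its own spdk_tgt, records spdk_tgt_pid=72750 and installs trap 'killprocess $spdk_tgt_pid; exit 1' ERR so the target is torn down even if an RPC step fails. The sketch below is a rough, simplified rendering of that cleanup pattern (killprocess in autotest_common.sh performs additional checks that are omitted here).

# Rough sketch of the trap/killprocess cleanup pattern used by alias_rpc.sh.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # Only act if the pid is still alive; report what is being killed.
    kill -0 "$pid" 2> /dev/null || return 0
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2> /dev/null
}

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'killprocess $spdk_tgt_pid; exit 1' ERR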
00:07:13.353 [2024-12-16 01:28:43.897226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72750 ] 00:07:13.612 [2024-12-16 01:28:44.042892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.612 [2024-12-16 01:28:44.062039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.612 [2024-12-16 01:28:44.097042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.612 01:28:44 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.612 01:28:44 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:13.612 01:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:13.871 01:28:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 72750 00:07:13.871 01:28:44 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 72750 ']' 00:07:13.871 01:28:44 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 72750 00:07:13.871 01:28:44 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72750 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.131 killing process with pid 72750 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72750' 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@973 -- # kill 72750 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@978 -- # wait 72750 00:07:14.131 ************************************ 00:07:14.131 END TEST alias_rpc 00:07:14.131 ************************************ 00:07:14.131 00:07:14.131 real 0m1.150s 00:07:14.131 user 0m1.319s 00:07:14.131 sys 0m0.327s 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.131 01:28:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.390 01:28:44 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:14.390 01:28:44 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:14.390 01:28:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:14.390 01:28:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.390 01:28:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.390 ************************************ 00:07:14.390 START TEST spdkcli_tcp 00:07:14.390 ************************************ 00:07:14.390 01:28:44 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:14.390 * Looking for test storage... 
00:07:14.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:14.390 01:28:44 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.390 01:28:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.390 01:28:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.390 01:28:44 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.390 01:28:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:14.390 01:28:45 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.390 01:28:45 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.391 01:28:45 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.391 --rc genhtml_branch_coverage=1 00:07:14.391 --rc genhtml_function_coverage=1 00:07:14.391 --rc genhtml_legend=1 00:07:14.391 --rc geninfo_all_blocks=1 00:07:14.391 --rc geninfo_unexecuted_blocks=1 00:07:14.391 00:07:14.391 ' 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.391 --rc genhtml_branch_coverage=1 00:07:14.391 --rc genhtml_function_coverage=1 00:07:14.391 --rc genhtml_legend=1 00:07:14.391 --rc geninfo_all_blocks=1 00:07:14.391 --rc geninfo_unexecuted_blocks=1 00:07:14.391 
00:07:14.391 ' 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.391 --rc genhtml_branch_coverage=1 00:07:14.391 --rc genhtml_function_coverage=1 00:07:14.391 --rc genhtml_legend=1 00:07:14.391 --rc geninfo_all_blocks=1 00:07:14.391 --rc geninfo_unexecuted_blocks=1 00:07:14.391 00:07:14.391 ' 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.391 --rc genhtml_branch_coverage=1 00:07:14.391 --rc genhtml_function_coverage=1 00:07:14.391 --rc genhtml_legend=1 00:07:14.391 --rc geninfo_all_blocks=1 00:07:14.391 --rc geninfo_unexecuted_blocks=1 00:07:14.391 00:07:14.391 ' 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=72821 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 72821 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 72821 ']' 00:07:14.391 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.391 01:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:14.651 [2024-12-16 01:28:45.086056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:14.651 [2024-12-16 01:28:45.086164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72821 ] 00:07:14.651 [2024-12-16 01:28:45.233964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.651 [2024-12-16 01:28:45.253882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.651 [2024-12-16 01:28:45.253889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.651 [2024-12-16 01:28:45.288814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.911 01:28:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.911 01:28:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:14.911 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=72831 00:07:14.911 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:14.911 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:15.170 [ 00:07:15.170 "bdev_malloc_delete", 00:07:15.170 "bdev_malloc_create", 00:07:15.170 "bdev_null_resize", 00:07:15.170 "bdev_null_delete", 00:07:15.170 "bdev_null_create", 00:07:15.170 "bdev_nvme_cuse_unregister", 00:07:15.170 "bdev_nvme_cuse_register", 00:07:15.170 "bdev_opal_new_user", 00:07:15.170 "bdev_opal_set_lock_state", 00:07:15.170 "bdev_opal_delete", 00:07:15.170 "bdev_opal_get_info", 00:07:15.170 "bdev_opal_create", 00:07:15.170 "bdev_nvme_opal_revert", 00:07:15.170 "bdev_nvme_opal_init", 00:07:15.170 "bdev_nvme_send_cmd", 00:07:15.170 "bdev_nvme_set_keys", 00:07:15.170 "bdev_nvme_get_path_iostat", 00:07:15.170 "bdev_nvme_get_mdns_discovery_info", 00:07:15.170 "bdev_nvme_stop_mdns_discovery", 00:07:15.170 "bdev_nvme_start_mdns_discovery", 00:07:15.170 "bdev_nvme_set_multipath_policy", 00:07:15.170 "bdev_nvme_set_preferred_path", 00:07:15.170 "bdev_nvme_get_io_paths", 00:07:15.170 "bdev_nvme_remove_error_injection", 00:07:15.170 "bdev_nvme_add_error_injection", 00:07:15.170 "bdev_nvme_get_discovery_info", 00:07:15.170 "bdev_nvme_stop_discovery", 00:07:15.170 "bdev_nvme_start_discovery", 00:07:15.170 "bdev_nvme_get_controller_health_info", 00:07:15.170 "bdev_nvme_disable_controller", 00:07:15.170 "bdev_nvme_enable_controller", 00:07:15.170 "bdev_nvme_reset_controller", 00:07:15.170 "bdev_nvme_get_transport_statistics", 00:07:15.170 "bdev_nvme_apply_firmware", 00:07:15.170 "bdev_nvme_detach_controller", 00:07:15.170 "bdev_nvme_get_controllers", 00:07:15.170 "bdev_nvme_attach_controller", 00:07:15.170 "bdev_nvme_set_hotplug", 00:07:15.170 "bdev_nvme_set_options", 00:07:15.170 "bdev_passthru_delete", 00:07:15.170 "bdev_passthru_create", 00:07:15.170 "bdev_lvol_set_parent_bdev", 00:07:15.170 "bdev_lvol_set_parent", 00:07:15.170 "bdev_lvol_check_shallow_copy", 00:07:15.170 "bdev_lvol_start_shallow_copy", 00:07:15.170 "bdev_lvol_grow_lvstore", 00:07:15.170 "bdev_lvol_get_lvols", 00:07:15.170 "bdev_lvol_get_lvstores", 00:07:15.170 "bdev_lvol_delete", 00:07:15.170 "bdev_lvol_set_read_only", 00:07:15.170 "bdev_lvol_resize", 00:07:15.170 "bdev_lvol_decouple_parent", 00:07:15.170 "bdev_lvol_inflate", 00:07:15.170 "bdev_lvol_rename", 00:07:15.170 "bdev_lvol_clone_bdev", 00:07:15.170 "bdev_lvol_clone", 00:07:15.170 "bdev_lvol_snapshot", 
00:07:15.170 "bdev_lvol_create", 00:07:15.170 "bdev_lvol_delete_lvstore", 00:07:15.170 "bdev_lvol_rename_lvstore", 00:07:15.170 "bdev_lvol_create_lvstore", 00:07:15.170 "bdev_raid_set_options", 00:07:15.170 "bdev_raid_remove_base_bdev", 00:07:15.170 "bdev_raid_add_base_bdev", 00:07:15.170 "bdev_raid_delete", 00:07:15.170 "bdev_raid_create", 00:07:15.170 "bdev_raid_get_bdevs", 00:07:15.170 "bdev_error_inject_error", 00:07:15.170 "bdev_error_delete", 00:07:15.170 "bdev_error_create", 00:07:15.170 "bdev_split_delete", 00:07:15.170 "bdev_split_create", 00:07:15.170 "bdev_delay_delete", 00:07:15.170 "bdev_delay_create", 00:07:15.170 "bdev_delay_update_latency", 00:07:15.170 "bdev_zone_block_delete", 00:07:15.170 "bdev_zone_block_create", 00:07:15.170 "blobfs_create", 00:07:15.170 "blobfs_detect", 00:07:15.170 "blobfs_set_cache_size", 00:07:15.170 "bdev_aio_delete", 00:07:15.170 "bdev_aio_rescan", 00:07:15.170 "bdev_aio_create", 00:07:15.170 "bdev_ftl_set_property", 00:07:15.170 "bdev_ftl_get_properties", 00:07:15.170 "bdev_ftl_get_stats", 00:07:15.170 "bdev_ftl_unmap", 00:07:15.170 "bdev_ftl_unload", 00:07:15.170 "bdev_ftl_delete", 00:07:15.170 "bdev_ftl_load", 00:07:15.170 "bdev_ftl_create", 00:07:15.170 "bdev_virtio_attach_controller", 00:07:15.170 "bdev_virtio_scsi_get_devices", 00:07:15.170 "bdev_virtio_detach_controller", 00:07:15.170 "bdev_virtio_blk_set_hotplug", 00:07:15.170 "bdev_iscsi_delete", 00:07:15.170 "bdev_iscsi_create", 00:07:15.170 "bdev_iscsi_set_options", 00:07:15.170 "bdev_uring_delete", 00:07:15.170 "bdev_uring_rescan", 00:07:15.170 "bdev_uring_create", 00:07:15.170 "accel_error_inject_error", 00:07:15.170 "ioat_scan_accel_module", 00:07:15.170 "dsa_scan_accel_module", 00:07:15.170 "iaa_scan_accel_module", 00:07:15.170 "vfu_virtio_create_fs_endpoint", 00:07:15.170 "vfu_virtio_create_scsi_endpoint", 00:07:15.170 "vfu_virtio_scsi_remove_target", 00:07:15.170 "vfu_virtio_scsi_add_target", 00:07:15.170 "vfu_virtio_create_blk_endpoint", 00:07:15.170 "vfu_virtio_delete_endpoint", 00:07:15.170 "keyring_file_remove_key", 00:07:15.170 "keyring_file_add_key", 00:07:15.170 "keyring_linux_set_options", 00:07:15.170 "fsdev_aio_delete", 00:07:15.170 "fsdev_aio_create", 00:07:15.170 "iscsi_get_histogram", 00:07:15.170 "iscsi_enable_histogram", 00:07:15.170 "iscsi_set_options", 00:07:15.170 "iscsi_get_auth_groups", 00:07:15.170 "iscsi_auth_group_remove_secret", 00:07:15.170 "iscsi_auth_group_add_secret", 00:07:15.170 "iscsi_delete_auth_group", 00:07:15.170 "iscsi_create_auth_group", 00:07:15.170 "iscsi_set_discovery_auth", 00:07:15.170 "iscsi_get_options", 00:07:15.170 "iscsi_target_node_request_logout", 00:07:15.170 "iscsi_target_node_set_redirect", 00:07:15.170 "iscsi_target_node_set_auth", 00:07:15.170 "iscsi_target_node_add_lun", 00:07:15.170 "iscsi_get_stats", 00:07:15.170 "iscsi_get_connections", 00:07:15.170 "iscsi_portal_group_set_auth", 00:07:15.170 "iscsi_start_portal_group", 00:07:15.170 "iscsi_delete_portal_group", 00:07:15.170 "iscsi_create_portal_group", 00:07:15.170 "iscsi_get_portal_groups", 00:07:15.170 "iscsi_delete_target_node", 00:07:15.170 "iscsi_target_node_remove_pg_ig_maps", 00:07:15.170 "iscsi_target_node_add_pg_ig_maps", 00:07:15.170 "iscsi_create_target_node", 00:07:15.171 "iscsi_get_target_nodes", 00:07:15.171 "iscsi_delete_initiator_group", 00:07:15.171 "iscsi_initiator_group_remove_initiators", 00:07:15.171 "iscsi_initiator_group_add_initiators", 00:07:15.171 "iscsi_create_initiator_group", 00:07:15.171 "iscsi_get_initiator_groups", 00:07:15.171 
"nvmf_set_crdt", 00:07:15.171 "nvmf_set_config", 00:07:15.171 "nvmf_set_max_subsystems", 00:07:15.171 "nvmf_stop_mdns_prr", 00:07:15.171 "nvmf_publish_mdns_prr", 00:07:15.171 "nvmf_subsystem_get_listeners", 00:07:15.171 "nvmf_subsystem_get_qpairs", 00:07:15.171 "nvmf_subsystem_get_controllers", 00:07:15.171 "nvmf_get_stats", 00:07:15.171 "nvmf_get_transports", 00:07:15.171 "nvmf_create_transport", 00:07:15.171 "nvmf_get_targets", 00:07:15.171 "nvmf_delete_target", 00:07:15.171 "nvmf_create_target", 00:07:15.171 "nvmf_subsystem_allow_any_host", 00:07:15.171 "nvmf_subsystem_set_keys", 00:07:15.171 "nvmf_subsystem_remove_host", 00:07:15.171 "nvmf_subsystem_add_host", 00:07:15.171 "nvmf_ns_remove_host", 00:07:15.171 "nvmf_ns_add_host", 00:07:15.171 "nvmf_subsystem_remove_ns", 00:07:15.171 "nvmf_subsystem_set_ns_ana_group", 00:07:15.171 "nvmf_subsystem_add_ns", 00:07:15.171 "nvmf_subsystem_listener_set_ana_state", 00:07:15.171 "nvmf_discovery_get_referrals", 00:07:15.171 "nvmf_discovery_remove_referral", 00:07:15.171 "nvmf_discovery_add_referral", 00:07:15.171 "nvmf_subsystem_remove_listener", 00:07:15.171 "nvmf_subsystem_add_listener", 00:07:15.171 "nvmf_delete_subsystem", 00:07:15.171 "nvmf_create_subsystem", 00:07:15.171 "nvmf_get_subsystems", 00:07:15.171 "env_dpdk_get_mem_stats", 00:07:15.171 "nbd_get_disks", 00:07:15.171 "nbd_stop_disk", 00:07:15.171 "nbd_start_disk", 00:07:15.171 "ublk_recover_disk", 00:07:15.171 "ublk_get_disks", 00:07:15.171 "ublk_stop_disk", 00:07:15.171 "ublk_start_disk", 00:07:15.171 "ublk_destroy_target", 00:07:15.171 "ublk_create_target", 00:07:15.171 "virtio_blk_create_transport", 00:07:15.171 "virtio_blk_get_transports", 00:07:15.171 "vhost_controller_set_coalescing", 00:07:15.171 "vhost_get_controllers", 00:07:15.171 "vhost_delete_controller", 00:07:15.171 "vhost_create_blk_controller", 00:07:15.171 "vhost_scsi_controller_remove_target", 00:07:15.171 "vhost_scsi_controller_add_target", 00:07:15.171 "vhost_start_scsi_controller", 00:07:15.171 "vhost_create_scsi_controller", 00:07:15.171 "thread_set_cpumask", 00:07:15.171 "scheduler_set_options", 00:07:15.171 "framework_get_governor", 00:07:15.171 "framework_get_scheduler", 00:07:15.171 "framework_set_scheduler", 00:07:15.171 "framework_get_reactors", 00:07:15.171 "thread_get_io_channels", 00:07:15.171 "thread_get_pollers", 00:07:15.171 "thread_get_stats", 00:07:15.171 "framework_monitor_context_switch", 00:07:15.171 "spdk_kill_instance", 00:07:15.171 "log_enable_timestamps", 00:07:15.171 "log_get_flags", 00:07:15.171 "log_clear_flag", 00:07:15.171 "log_set_flag", 00:07:15.171 "log_get_level", 00:07:15.171 "log_set_level", 00:07:15.171 "log_get_print_level", 00:07:15.171 "log_set_print_level", 00:07:15.171 "framework_enable_cpumask_locks", 00:07:15.171 "framework_disable_cpumask_locks", 00:07:15.171 "framework_wait_init", 00:07:15.171 "framework_start_init", 00:07:15.171 "scsi_get_devices", 00:07:15.171 "bdev_get_histogram", 00:07:15.171 "bdev_enable_histogram", 00:07:15.171 "bdev_set_qos_limit", 00:07:15.171 "bdev_set_qd_sampling_period", 00:07:15.171 "bdev_get_bdevs", 00:07:15.171 "bdev_reset_iostat", 00:07:15.171 "bdev_get_iostat", 00:07:15.171 "bdev_examine", 00:07:15.171 "bdev_wait_for_examine", 00:07:15.171 "bdev_set_options", 00:07:15.171 "accel_get_stats", 00:07:15.171 "accel_set_options", 00:07:15.171 "accel_set_driver", 00:07:15.171 "accel_crypto_key_destroy", 00:07:15.171 "accel_crypto_keys_get", 00:07:15.171 "accel_crypto_key_create", 00:07:15.171 "accel_assign_opc", 00:07:15.171 
"accel_get_module_info", 00:07:15.171 "accel_get_opc_assignments", 00:07:15.171 "vmd_rescan", 00:07:15.171 "vmd_remove_device", 00:07:15.171 "vmd_enable", 00:07:15.171 "sock_get_default_impl", 00:07:15.171 "sock_set_default_impl", 00:07:15.171 "sock_impl_set_options", 00:07:15.171 "sock_impl_get_options", 00:07:15.171 "iobuf_get_stats", 00:07:15.171 "iobuf_set_options", 00:07:15.171 "keyring_get_keys", 00:07:15.171 "vfu_tgt_set_base_path", 00:07:15.171 "framework_get_pci_devices", 00:07:15.171 "framework_get_config", 00:07:15.171 "framework_get_subsystems", 00:07:15.171 "fsdev_set_opts", 00:07:15.171 "fsdev_get_opts", 00:07:15.171 "trace_get_info", 00:07:15.171 "trace_get_tpoint_group_mask", 00:07:15.171 "trace_disable_tpoint_group", 00:07:15.171 "trace_enable_tpoint_group", 00:07:15.171 "trace_clear_tpoint_mask", 00:07:15.171 "trace_set_tpoint_mask", 00:07:15.171 "notify_get_notifications", 00:07:15.171 "notify_get_types", 00:07:15.171 "spdk_get_version", 00:07:15.171 "rpc_get_methods" 00:07:15.171 ] 00:07:15.171 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.171 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:15.171 01:28:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 72821 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 72821 ']' 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 72821 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72821 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:15.171 killing process with pid 72821 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72821' 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 72821 00:07:15.171 01:28:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 72821 00:07:15.430 00:07:15.430 real 0m1.167s 00:07:15.430 user 0m2.054s 00:07:15.430 sys 0m0.370s 00:07:15.430 01:28:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.430 01:28:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.430 ************************************ 00:07:15.430 END TEST spdkcli_tcp 00:07:15.430 ************************************ 00:07:15.430 01:28:46 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:15.430 01:28:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.430 01:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.430 01:28:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.430 ************************************ 00:07:15.430 START TEST dpdk_mem_utility 00:07:15.430 ************************************ 00:07:15.430 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:15.690 * Looking for test storage... 
00:07:15.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:15.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.690 01:28:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.690 --rc genhtml_branch_coverage=1 00:07:15.690 --rc genhtml_function_coverage=1 00:07:15.690 --rc genhtml_legend=1 00:07:15.690 --rc geninfo_all_blocks=1 00:07:15.690 --rc geninfo_unexecuted_blocks=1 00:07:15.690 00:07:15.690 ' 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.690 --rc genhtml_branch_coverage=1 00:07:15.690 --rc genhtml_function_coverage=1 00:07:15.690 --rc genhtml_legend=1 00:07:15.690 --rc geninfo_all_blocks=1 00:07:15.690 --rc geninfo_unexecuted_blocks=1 00:07:15.690 00:07:15.690 ' 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.690 --rc genhtml_branch_coverage=1 00:07:15.690 --rc genhtml_function_coverage=1 00:07:15.690 --rc genhtml_legend=1 00:07:15.690 --rc geninfo_all_blocks=1 00:07:15.690 --rc geninfo_unexecuted_blocks=1 00:07:15.690 00:07:15.690 ' 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.690 --rc genhtml_branch_coverage=1 00:07:15.690 --rc genhtml_function_coverage=1 00:07:15.690 --rc genhtml_legend=1 00:07:15.690 --rc geninfo_all_blocks=1 00:07:15.690 --rc geninfo_unexecuted_blocks=1 00:07:15.690 00:07:15.690 ' 00:07:15.690 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:15.690 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=72913 00:07:15.690 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 72913 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 72913 ']' 00:07:15.690 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.690 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:15.690 [2024-12-16 01:28:46.328402] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:15.690 [2024-12-16 01:28:46.328957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72913 ] 00:07:15.949 [2024-12-16 01:28:46.478347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.949 [2024-12-16 01:28:46.498041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.949 [2024-12-16 01:28:46.533948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:16.210 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.210 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:16.210 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:16.210 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:16.210 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.210 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:16.210 { 00:07:16.210 "filename": "/tmp/spdk_mem_dump.txt" 00:07:16.210 } 00:07:16.210 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.210 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:16.210 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:16.210 1 heaps totaling size 818.000000 MiB 00:07:16.210 size: 818.000000 MiB heap id: 0 00:07:16.210 end heaps---------- 00:07:16.210 9 mempools totaling size 603.782043 MiB 00:07:16.210 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:16.210 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:16.210 size: 100.555481 MiB name: bdev_io_72913 00:07:16.210 size: 50.003479 MiB name: msgpool_72913 00:07:16.210 size: 36.509338 MiB name: fsdev_io_72913 00:07:16.210 size: 21.763794 MiB name: PDU_Pool 00:07:16.210 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:16.210 size: 4.133484 MiB name: evtpool_72913 00:07:16.210 size: 0.026123 MiB name: Session_Pool 00:07:16.210 end mempools------- 00:07:16.210 6 memzones totaling size 4.142822 MiB 00:07:16.210 size: 1.000366 MiB name: RG_ring_0_72913 00:07:16.210 size: 1.000366 MiB name: RG_ring_1_72913 00:07:16.210 size: 1.000366 MiB name: RG_ring_4_72913 00:07:16.210 size: 1.000366 MiB name: RG_ring_5_72913 00:07:16.210 size: 0.125366 MiB name: RG_ring_2_72913 00:07:16.210 size: 0.015991 MiB name: RG_ring_3_72913 00:07:16.210 end memzones------- 00:07:16.210 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:16.210 heap id: 0 total size: 818.000000 MiB number of busy elements: 329 number of free elements: 15 00:07:16.210 list of free elements. 
size: 10.800293 MiB 00:07:16.210 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:16.210 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:16.210 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:16.210 element at address: 0x200000400000 with size: 0.993958 MiB 00:07:16.210 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:16.210 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:16.211 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:16.211 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:16.211 element at address: 0x20001ae00000 with size: 0.565491 MiB 00:07:16.211 element at address: 0x20000a600000 with size: 0.488892 MiB 00:07:16.211 element at address: 0x200000c00000 with size: 0.486267 MiB 00:07:16.211 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:16.211 element at address: 0x200003e00000 with size: 0.480286 MiB 00:07:16.211 element at address: 0x200028200000 with size: 0.395752 MiB 00:07:16.211 element at address: 0x200000800000 with size: 0.351746 MiB 00:07:16.211 list of standard malloc elements. size: 199.270813 MiB 00:07:16.211 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:16.211 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:16.211 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:16.211 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:16.211 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:16.211 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:16.211 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:16.211 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:16.211 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:16.211 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:07:16.211 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000085e580 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087e840 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087e900 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f080 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f140 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f200 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f380 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f440 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f500 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:07:16.211 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:16.211 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:16.211 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae90c40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae90d00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae90dc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae90e80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae90f40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91000 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae910c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91180 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92380 with size: 0.000183 MiB 
00:07:16.212 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:07:16.212 element at 
address: 0x20001ae94900 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:16.212 element at address: 0x200028265500 with size: 0.000183 MiB 00:07:16.212 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c480 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c540 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c600 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c780 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c840 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c900 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d080 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d140 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d200 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d380 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d440 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d500 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d680 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d740 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d800 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826d980 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826da40 
with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826db00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826de00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826df80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e040 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e100 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e280 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e340 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e400 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e580 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e640 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e700 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e880 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826e940 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:07:16.212 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f000 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f180 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f240 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f300 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f480 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f540 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f600 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f780 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f840 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f900 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:16.213 element at address: 0x20002826ff00 with size: 0.000183 MiB 
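The element listing above, together with the memzone list that follows it, is the per-heap breakdown printed by scripts/dpdk_mem_info.py -m 0. As the trace shows, the dpdk_mem_utility test drives this in two steps: the env_dpdk_get_mem_stats RPC makes the running spdk_tgt write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py then reads that dump, once for the heap/mempool/memzone summary and once with -m 0 for the element-level view of heap 0. A minimal sketch against an already running target, assuming the scripts are invoked from the repository root (the test itself uses absolute paths):

# ask the target to dump its DPDK memory statistics; the RPC reply names /tmp/spdk_mem_dump.txt
./scripts/rpc.py env_dpdk_get_mem_stats

# summarize the dump: heap totals, mempools and memzones
./scripts/dpdk_mem_info.py

# element-level breakdown of heap 0, as shown in the surrounding listing
./scripts/dpdk_mem_info.py -m 0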
00:07:16.213 list of memzone associated elements. size: 607.928894 MiB 00:07:16.213 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:16.213 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:16.213 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:16.213 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:16.213 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:16.213 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_72913_0 00:07:16.213 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:16.213 associated memzone info: size: 48.002930 MiB name: MP_msgpool_72913_0 00:07:16.213 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:16.213 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_72913_0 00:07:16.213 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:16.213 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:16.213 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:16.213 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:16.213 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:16.213 associated memzone info: size: 3.000122 MiB name: MP_evtpool_72913_0 00:07:16.213 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:16.213 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_72913 00:07:16.213 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:16.213 associated memzone info: size: 1.007996 MiB name: MP_evtpool_72913 00:07:16.213 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:16.213 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:16.213 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:16.213 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:16.213 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:16.213 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:16.213 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:16.213 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:16.213 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:16.213 associated memzone info: size: 1.000366 MiB name: RG_ring_0_72913 00:07:16.213 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:16.213 associated memzone info: size: 1.000366 MiB name: RG_ring_1_72913 00:07:16.213 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:16.213 associated memzone info: size: 1.000366 MiB name: RG_ring_4_72913 00:07:16.213 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:16.213 associated memzone info: size: 1.000366 MiB name: RG_ring_5_72913 00:07:16.213 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:16.213 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_72913 00:07:16.213 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:16.213 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_72913 00:07:16.213 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:16.213 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:16.213 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:16.213 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:16.213 element at address: 0x20001987c540 with size: 0.250488 MiB 
00:07:16.213 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:16.213 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:16.213 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_72913 00:07:16.213 element at address: 0x20000085e640 with size: 0.125488 MiB 00:07:16.213 associated memzone info: size: 0.125366 MiB name: RG_ring_2_72913 00:07:16.213 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:16.213 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:16.213 element at address: 0x200028265680 with size: 0.023743 MiB 00:07:16.213 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:16.213 element at address: 0x20000085a380 with size: 0.016113 MiB 00:07:16.213 associated memzone info: size: 0.015991 MiB name: RG_ring_3_72913 00:07:16.213 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:07:16.213 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:16.213 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:07:16.213 associated memzone info: size: 0.000183 MiB name: MP_msgpool_72913 00:07:16.213 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:16.213 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_72913 00:07:16.213 element at address: 0x20000085a180 with size: 0.000305 MiB 00:07:16.213 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_72913 00:07:16.213 element at address: 0x20002826c280 with size: 0.000305 MiB 00:07:16.213 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:16.213 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:16.213 01:28:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 72913 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 72913 ']' 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 72913 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72913 00:07:16.213 killing process with pid 72913 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72913' 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 72913 00:07:16.213 01:28:46 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 72913 00:07:16.473 ************************************ 00:07:16.473 END TEST dpdk_mem_utility 00:07:16.473 ************************************ 00:07:16.473 00:07:16.473 real 0m1.023s 00:07:16.473 user 0m1.077s 00:07:16.473 sys 0m0.327s 00:07:16.473 01:28:47 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.473 01:28:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:16.473 01:28:47 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:16.473 01:28:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.473 01:28:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.473 
01:28:47 -- common/autotest_common.sh@10 -- # set +x 00:07:16.473 ************************************ 00:07:16.473 START TEST event 00:07:16.473 ************************************ 00:07:16.473 01:28:47 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:16.732 * Looking for test storage... 00:07:16.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.732 01:28:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.732 01:28:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.732 01:28:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.732 01:28:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.732 01:28:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.732 01:28:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.732 01:28:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.732 01:28:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.732 01:28:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.732 01:28:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.732 01:28:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.732 01:28:47 event -- scripts/common.sh@344 -- # case "$op" in 00:07:16.732 01:28:47 event -- scripts/common.sh@345 -- # : 1 00:07:16.732 01:28:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.732 01:28:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.732 01:28:47 event -- scripts/common.sh@365 -- # decimal 1 00:07:16.732 01:28:47 event -- scripts/common.sh@353 -- # local d=1 00:07:16.732 01:28:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.732 01:28:47 event -- scripts/common.sh@355 -- # echo 1 00:07:16.732 01:28:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.732 01:28:47 event -- scripts/common.sh@366 -- # decimal 2 00:07:16.732 01:28:47 event -- scripts/common.sh@353 -- # local d=2 00:07:16.732 01:28:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.732 01:28:47 event -- scripts/common.sh@355 -- # echo 2 00:07:16.732 01:28:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.732 01:28:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.732 01:28:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.732 01:28:47 event -- scripts/common.sh@368 -- # return 0 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.732 --rc genhtml_branch_coverage=1 00:07:16.732 --rc genhtml_function_coverage=1 00:07:16.732 --rc genhtml_legend=1 00:07:16.732 --rc geninfo_all_blocks=1 00:07:16.732 --rc geninfo_unexecuted_blocks=1 00:07:16.732 00:07:16.732 ' 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.732 --rc genhtml_branch_coverage=1 00:07:16.732 --rc genhtml_function_coverage=1 00:07:16.732 --rc genhtml_legend=1 00:07:16.732 --rc geninfo_all_blocks=1 00:07:16.732 --rc geninfo_unexecuted_blocks=1 00:07:16.732 00:07:16.732 ' 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:16.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.732 --rc genhtml_branch_coverage=1 00:07:16.732 --rc genhtml_function_coverage=1 00:07:16.732 --rc genhtml_legend=1 00:07:16.732 --rc geninfo_all_blocks=1 00:07:16.732 --rc geninfo_unexecuted_blocks=1 00:07:16.732 00:07:16.732 ' 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.732 --rc genhtml_branch_coverage=1 00:07:16.732 --rc genhtml_function_coverage=1 00:07:16.732 --rc genhtml_legend=1 00:07:16.732 --rc geninfo_all_blocks=1 00:07:16.732 --rc geninfo_unexecuted_blocks=1 00:07:16.732 00:07:16.732 ' 00:07:16.732 01:28:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:16.732 01:28:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:16.732 01:28:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:16.732 01:28:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.732 01:28:47 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.732 ************************************ 00:07:16.732 START TEST event_perf 00:07:16.732 ************************************ 00:07:16.732 01:28:47 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:16.732 Running I/O for 1 seconds...[2024-12-16 
01:28:47.333216] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:16.732 [2024-12-16 01:28:47.333433] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72985 ] 00:07:16.992 [2024-12-16 01:28:47.472534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.992 [2024-12-16 01:28:47.493681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.992 [2024-12-16 01:28:47.493781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.992 [2024-12-16 01:28:47.493845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.992 Running I/O for 1 seconds...[2024-12-16 01:28:47.493846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.928 00:07:17.928 lcore 0: 197946 00:07:17.928 lcore 1: 197946 00:07:17.928 lcore 2: 197947 00:07:17.928 lcore 3: 197946 00:07:17.928 done. 00:07:17.928 00:07:17.928 real 0m1.216s 00:07:17.928 user 0m4.054s 00:07:17.928 sys 0m0.043s 00:07:17.928 01:28:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.928 01:28:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.928 ************************************ 00:07:17.928 END TEST event_perf 00:07:17.928 ************************************ 00:07:17.928 01:28:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:17.928 01:28:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:17.928 01:28:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.928 01:28:48 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.187 ************************************ 00:07:18.187 START TEST event_reactor 00:07:18.187 ************************************ 00:07:18.187 01:28:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:18.187 [2024-12-16 01:28:48.605207] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
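Note on the core masks used throughout this run: -m takes a hex bitmap and each set bit pins one reactor to that lcore, so the -m 0xF passed to event_perf above is why the trace shows "Total cores available: 4", four "Reactor started on core 0..3" lines and four per-lcore counters, while the -m 0x3 used by app_repeat later selects cores 0 and 1 only. A small illustrative helper (not part of the test suite) that decodes such a mask:

#!/usr/bin/env bash
# Illustrative helper (not part of the SPDK test scripts): decode an SPDK/DPDK core
# mask such as the -m 0xF passed to event_perf above into the lcores it selects.
mask=$(( ${1:-0xF} ))                               # accept hex (0xF) or decimal input
cores=()
for ((bit = 0; bit < 64; bit++)); do
  (( (mask >> bit) & 1 )) && cores+=("$bit")        # bit N set -> lcore N is in the mask
done
echo "mask ${1:-0xF} selects lcores: ${cores[*]}"   # 0xF -> 0 1 2 3, 0x3 -> 0 1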
00:07:18.187 [2024-12-16 01:28:48.605467] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73018 ] 00:07:18.187 [2024-12-16 01:28:48.749547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.188 [2024-12-16 01:28:48.767681] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.149 test_start 00:07:19.149 oneshot 00:07:19.149 tick 100 00:07:19.149 tick 100 00:07:19.149 tick 250 00:07:19.149 tick 100 00:07:19.149 tick 100 00:07:19.149 tick 100 00:07:19.149 tick 250 00:07:19.149 tick 500 00:07:19.149 tick 100 00:07:19.149 tick 100 00:07:19.149 tick 250 00:07:19.149 tick 100 00:07:19.149 tick 100 00:07:19.149 test_end 00:07:19.149 ************************************ 00:07:19.149 END TEST event_reactor 00:07:19.149 ************************************ 00:07:19.150 00:07:19.150 real 0m1.213s 00:07:19.150 user 0m1.069s 00:07:19.150 sys 0m0.039s 00:07:19.150 01:28:49 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.150 01:28:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:19.409 01:28:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:19.409 01:28:49 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:19.409 01:28:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.409 01:28:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:19.409 ************************************ 00:07:19.409 START TEST event_reactor_perf 00:07:19.409 ************************************ 00:07:19.409 01:28:49 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:19.409 [2024-12-16 01:28:49.867932] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:19.409 [2024-12-16 01:28:49.868030] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73048 ] 00:07:19.409 [2024-12-16 01:28:50.007286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.409 [2024-12-16 01:28:50.028436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.786 test_start 00:07:20.786 test_end 00:07:20.786 Performance: 438536 events per second 00:07:20.786 ************************************ 00:07:20.786 END TEST event_reactor_perf 00:07:20.786 ************************************ 00:07:20.786 00:07:20.786 real 0m1.205s 00:07:20.786 user 0m1.066s 00:07:20.786 sys 0m0.033s 00:07:20.786 01:28:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.786 01:28:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.786 01:28:51 event -- event/event.sh@49 -- # uname -s 00:07:20.786 01:28:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:20.786 01:28:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:20.786 01:28:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.786 01:28:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.786 01:28:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.786 ************************************ 00:07:20.786 START TEST event_scheduler 00:07:20.786 ************************************ 00:07:20.786 01:28:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:20.786 * Looking for test storage... 
00:07:20.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:20.786 01:28:51 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.786 01:28:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.786 01:28:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.786 01:28:51 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.786 01:28:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.786 01:28:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:20.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.787 01:28:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.787 --rc genhtml_branch_coverage=1 00:07:20.787 --rc genhtml_function_coverage=1 00:07:20.787 --rc genhtml_legend=1 00:07:20.787 --rc geninfo_all_blocks=1 00:07:20.787 --rc geninfo_unexecuted_blocks=1 00:07:20.787 00:07:20.787 ' 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.787 --rc genhtml_branch_coverage=1 00:07:20.787 --rc genhtml_function_coverage=1 00:07:20.787 --rc genhtml_legend=1 00:07:20.787 --rc geninfo_all_blocks=1 00:07:20.787 --rc geninfo_unexecuted_blocks=1 00:07:20.787 00:07:20.787 ' 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.787 --rc genhtml_branch_coverage=1 00:07:20.787 --rc genhtml_function_coverage=1 00:07:20.787 --rc genhtml_legend=1 00:07:20.787 --rc geninfo_all_blocks=1 00:07:20.787 --rc geninfo_unexecuted_blocks=1 00:07:20.787 00:07:20.787 ' 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.787 --rc genhtml_branch_coverage=1 00:07:20.787 --rc genhtml_function_coverage=1 00:07:20.787 --rc genhtml_legend=1 00:07:20.787 --rc geninfo_all_blocks=1 00:07:20.787 --rc geninfo_unexecuted_blocks=1 00:07:20.787 00:07:20.787 ' 00:07:20.787 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:20.787 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=73123 00:07:20.787 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:20.787 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:20.787 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 73123 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 73123 ']' 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.787 01:28:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:20.787 [2024-12-16 01:28:51.369374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
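The scripts/common.sh xtrace earlier in this test's setup (the "lt 1.15 2" check driven through cmp_versions) is a component-wise comparison of dotted version strings, used to decide which lcov options the installed lcov understands. A rough standalone reconstruction of that idea (illustrative only; the function name here is hypothetical and scripts/common.sh in the SPDK tree remains the authoritative implementation):

#!/usr/bin/env bash
# Rough reconstruction of the dotted-version comparison traced above (illustrative).
version_lt() {                            # succeeds (returns 0) when $1 < $2
  local IFS=.-:                           # split on dots, dashes and colons, as the trace does
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  local i x y
  for ((i = 0; i < n; i++)); do
    x=${a[i]:-0}; y=${b[i]:-0}            # missing components compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                                # equal is not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"      # mirrors the 'lt 1.15 2' check in the trace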
00:07:20.787 [2024-12-16 01:28:51.369644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73123 ] 00:07:21.046 [2024-12-16 01:28:51.511008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.046 [2024-12-16 01:28:51.533685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.046 [2024-12-16 01:28:51.533824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.046 [2024-12-16 01:28:51.533951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.046 [2024-12-16 01:28:51.533952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:21.046 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:21.046 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:21.046 POWER: Cannot set governor of lcore 0 to userspace 00:07:21.046 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:21.046 POWER: Cannot set governor of lcore 0 to performance 00:07:21.046 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:21.046 POWER: Cannot set governor of lcore 0 to userspace 00:07:21.046 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:21.046 POWER: Cannot set governor of lcore 0 to userspace 00:07:21.046 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:21.046 POWER: Unable to set Power Management Environment for lcore 0 00:07:21.046 [2024-12-16 01:28:51.589580] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:21.046 [2024-12-16 01:28:51.589593] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:21.046 [2024-12-16 01:28:51.589602] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:21.046 [2024-12-16 01:28:51.589614] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:21.046 [2024-12-16 01:28:51.589621] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:21.046 [2024-12-16 01:28:51.589628] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.046 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:21.046 [2024-12-16 01:28:51.623626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.046 [2024-12-16 01:28:51.640185] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
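The POWER/GUEST_CHANNEL messages above are the DPDK governor failing to take over cpufreq scaling inside the VM; the dynamic scheduler notes the failure, keeps its defaults (load limit 20, core limit 80, core busy 95), and the test proceeds. The two RPCs the script drives, framework_set_scheduler and framework_start_init, can be replayed by hand against any SPDK app started with --wait-for-rpc. A minimal sketch, with spdk_tgt standing in for the scheduler test binary and paths taken from an SPDK checkout:

#!/usr/bin/env bash
# Sketch of the RPC sequence scheduler.sh performs above, replayed by hand.
# spdk_tgt stands in for the scheduler test binary; paths assume an SPDK checkout.
SOCK=/var/tmp/spdk.sock
./build/bin/spdk_tgt -m 0xF -r "$SOCK" --wait-for-rpc &       # start paused, RPC server only
sleep 1                                                       # crude wait; a socket poll is sketched further below
./scripts/rpc.py -s "$SOCK" framework_set_scheduler dynamic   # must be set before init
./scripts/rpc.py -s "$SOCK" framework_start_init              # now let subsystems initialize
./scripts/rpc.py -s "$SOCK" framework_get_scheduler           # confirm which scheduler is active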
00:07:21.046 01:28:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.047 01:28:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:21.047 01:28:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.047 01:28:51 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:21.047 ************************************ 00:07:21.047 START TEST scheduler_create_thread 00:07:21.047 ************************************ 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.047 2 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.047 3 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.047 4 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.047 5 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.047 6 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.047 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 7 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 8 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 9 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 10 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.306 01:28:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.874 01:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.874 01:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:21.874 01:28:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:21.874 01:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.874 01:28:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.810 ************************************ 00:07:22.810 END TEST scheduler_create_thread 00:07:22.810 ************************************ 00:07:22.810 01:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.810 00:07:22.810 real 0m1.748s 00:07:22.810 user 0m0.022s 00:07:22.810 sys 0m0.003s 00:07:22.810 01:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.810 01:28:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:22.810 01:28:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:22.810 01:28:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 73123 00:07:22.810 01:28:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 73123 ']' 00:07:22.810 01:28:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 73123 00:07:22.810 01:28:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:22.810 01:28:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.810 01:28:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73123 00:07:23.069 killing process with pid 73123 00:07:23.069 01:28:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:23.069 01:28:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:23.069 01:28:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73123' 00:07:23.069 01:28:53 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 73123 00:07:23.069 01:28:53 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 73123 00:07:23.328 [2024-12-16 01:28:53.880395] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
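The killprocess trace just above (kill -0, a ps comm check, then kill and wait) guards against signalling an unrelated PID: the helper only sends the signal once it has confirmed the PID is still alive and still looks like the reactor process it started. A rough standalone equivalent (illustrative; the real helper lives in common/autotest_common.sh and has a few more special cases):

#!/usr/bin/env bash
# Rough equivalent of the killprocess pattern traced above (illustrative sketch).
killprocess_sketch() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 0            # already gone: nothing to do
  name=$(ps --no-headers -o comm= -p "$pid")        # confirm what is running under that PID
  [[ $name == sudo ]] && return 1                   # the real helper special-cases sudo wrappers; bail here
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                   # reap it if it was started by this shell
}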
00:07:23.588 00:07:23.588 real 0m2.885s 00:07:23.588 user 0m3.560s 00:07:23.588 sys 0m0.315s 00:07:23.588 01:28:53 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.588 01:28:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:23.588 ************************************ 00:07:23.588 END TEST event_scheduler 00:07:23.588 ************************************ 00:07:23.588 01:28:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:23.588 01:28:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:23.588 01:28:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:23.588 01:28:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.588 01:28:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.588 ************************************ 00:07:23.588 START TEST app_repeat 00:07:23.588 ************************************ 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:23.588 Process app_repeat pid: 73193 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=73193 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73193' 00:07:23.588 spdk_app_start Round 0 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:23.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:23.588 01:28:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73193 /var/tmp/spdk-nbd.sock 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73193 ']' 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.588 01:28:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.588 [2024-12-16 01:28:54.090014] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
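app_repeat is launched with -r /var/tmp/spdk-nbd.sock, and the harness then blocks in waitforlisten until that RPC socket answers before it starts issuing bdev and nbd RPCs. A rough poll-until-ready equivalent (illustrative; the real waitforlisten also keeps checking that the PID it launched is still alive):

#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll the RPC socket until the target answers.
# The socket path matches the app_repeat run above; the retry budget is arbitrary.
SOCK=/var/tmp/spdk-nbd.sock
for ((i = 0; i < 100; i++)); do
  if ./scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
    echo "RPC server on $SOCK is up"
    exit 0
  fi
  sleep 0.1
done
echo "timed out waiting for $SOCK" >&2
exit 1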
00:07:23.588 [2024-12-16 01:28:54.090260] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73193 ] 00:07:23.588 [2024-12-16 01:28:54.231427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.847 [2024-12-16 01:28:54.253861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.847 [2024-12-16 01:28:54.253870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.847 [2024-12-16 01:28:54.283294] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.847 01:28:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.847 01:28:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:23.847 01:28:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.106 Malloc0 00:07:24.106 01:28:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.364 Malloc1 00:07:24.364 01:28:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.364 01:28:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.365 01:28:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:24.624 /dev/nbd0 00:07:24.624 01:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.624 01:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.624 01:28:55 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.624 1+0 records in 00:07:24.624 1+0 records out 00:07:24.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467745 s, 8.8 MB/s 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.624 01:28:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.624 01:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.624 01:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.624 01:28:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.883 /dev/nbd1 00:07:24.883 01:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.883 01:28:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.884 1+0 records in 00:07:24.884 1+0 records out 00:07:24.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288853 s, 14.2 MB/s 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.884 01:28:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.884 01:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.884 01:28:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.884 01:28:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:24.884 01:28:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.884 01:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.451 01:28:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.451 { 00:07:25.452 "nbd_device": "/dev/nbd0", 00:07:25.452 "bdev_name": "Malloc0" 00:07:25.452 }, 00:07:25.452 { 00:07:25.452 "nbd_device": "/dev/nbd1", 00:07:25.452 "bdev_name": "Malloc1" 00:07:25.452 } 00:07:25.452 ]' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.452 { 00:07:25.452 "nbd_device": "/dev/nbd0", 00:07:25.452 "bdev_name": "Malloc0" 00:07:25.452 }, 00:07:25.452 { 00:07:25.452 "nbd_device": "/dev/nbd1", 00:07:25.452 "bdev_name": "Malloc1" 00:07:25.452 } 00:07:25.452 ]' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.452 /dev/nbd1' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.452 /dev/nbd1' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:25.452 256+0 records in 00:07:25.452 256+0 records out 00:07:25.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105567 s, 99.3 MB/s 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.452 256+0 records in 00:07:25.452 256+0 records out 00:07:25.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226633 s, 46.3 MB/s 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.452 256+0 records in 00:07:25.452 256+0 records out 00:07:25.452 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265278 s, 39.5 MB/s 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.452 01:28:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.711 01:28:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.970 01:28:56 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.970 01:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.229 01:28:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.229 01:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.229 01:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:26.488 01:28:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:26.488 01:28:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:26.747 01:28:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:26.747 [2024-12-16 01:28:57.238872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.747 [2024-12-16 01:28:57.261324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.747 [2024-12-16 01:28:57.261344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.747 [2024-12-16 01:28:57.290524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.747 [2024-12-16 01:28:57.290658] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:26.747 [2024-12-16 01:28:57.290674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.034 spdk_app_start Round 1 00:07:30.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:30.034 01:29:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:30.034 01:29:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:30.034 01:29:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73193 /var/tmp/spdk-nbd.sock 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73193 ']' 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
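Each app_repeat round shown above has the same shape: create two malloc bdevs over RPC, map them to /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through dd with O_DIRECT, read each device back with cmp against the source file, then unmap the devices and tell the target to exit before the next round starts. A condensed, hand-runnable version of one round (same RPC names and sizes as in the trace; error handling kept minimal):

#!/usr/bin/env bash
# Condensed sketch of one app_repeat verification round, following the trace above.
# Assumes an SPDK app already answers on $SOCK and the nbd kernel module is loaded.
set -e
SOCK=/var/tmp/spdk-nbd.sock
RPC="./scripts/rpc.py -s $SOCK"
$RPC bdev_malloc_create 64 4096                  # 64 MB malloc bdev, 4 KiB blocks -> Malloc0
$RPC bdev_malloc_create 64 4096                  # second one                      -> Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0            # expose the bdevs as kernel block devices
$RPC nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256            # 1 MiB of random data
for dev in /dev/nbd0 /dev/nbd1; do
  dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct   # write it to the device
  cmp -b -n 1M /tmp/nbdrandtest "$dev"                              # read back and compare
done
rm /tmp/nbdrandtest
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
$RPC spdk_kill_instance SIGTERM                  # ask the target to shut down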
00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.034 01:29:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:30.034 01:29:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.293 Malloc0 00:07:30.293 01:29:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.293 Malloc1 00:07:30.552 01:29:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.552 01:29:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:30.552 /dev/nbd0 00:07:30.811 01:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:30.811 01:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.811 1+0 records in 00:07:30.811 1+0 records out 
00:07:30.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204701 s, 20.0 MB/s 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:30.811 01:29:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:30.811 01:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.811 01:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.811 01:29:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:31.070 /dev/nbd1 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.070 1+0 records in 00:07:31.070 1+0 records out 00:07:31.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000229179 s, 17.9 MB/s 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:31.070 01:29:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.070 01:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.329 01:29:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:31.329 { 00:07:31.329 "nbd_device": "/dev/nbd0", 00:07:31.329 "bdev_name": "Malloc0" 00:07:31.329 }, 00:07:31.329 { 00:07:31.329 "nbd_device": "/dev/nbd1", 00:07:31.329 "bdev_name": "Malloc1" 00:07:31.329 } 
00:07:31.329 ]' 00:07:31.329 01:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:31.329 { 00:07:31.329 "nbd_device": "/dev/nbd0", 00:07:31.329 "bdev_name": "Malloc0" 00:07:31.329 }, 00:07:31.329 { 00:07:31.329 "nbd_device": "/dev/nbd1", 00:07:31.329 "bdev_name": "Malloc1" 00:07:31.329 } 00:07:31.330 ]' 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:31.330 /dev/nbd1' 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:31.330 /dev/nbd1' 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:31.330 256+0 records in 00:07:31.330 256+0 records out 00:07:31.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104414 s, 100 MB/s 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:31.330 256+0 records in 00:07:31.330 256+0 records out 00:07:31.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244432 s, 42.9 MB/s 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.330 01:29:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:31.589 256+0 records in 00:07:31.589 256+0 records out 00:07:31.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268582 s, 39.0 MB/s 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.589 01:29:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.589 01:29:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.848 01:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.848 01:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.849 01:29:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.108 01:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.367 01:29:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.367 01:29:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:32.626 01:29:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:32.885 [2024-12-16 01:29:03.325215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.885 [2024-12-16 01:29:03.346313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.885 [2024-12-16 01:29:03.346324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.885 [2024-12-16 01:29:03.377934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.885 [2024-12-16 01:29:03.378032] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:32.885 [2024-12-16 01:29:03.378062] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.173 spdk_app_start Round 2 00:07:36.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.173 01:29:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:36.173 01:29:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:36.173 01:29:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 73193 /var/tmp/spdk-nbd.sock 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73193 ']' 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
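For readability, the sequence each app_repeat round above exercises can be condensed into the short sketch below. It is not part of the test scripts; every command, socket and path is taken verbatim from the trace (rpc.py on /var/tmp/spdk-nbd.sock, bdev_malloc_create 64 4096, the nbdrandtest temp file), and the waitfornbd polling plus error handling of nbd_common.sh are omitted.

# Condensed sketch (assumption: run against an spdk_tgt already listening on RPC_SOCK).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk-nbd.sock
TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

$RPC -s $RPC_SOCK bdev_malloc_create 64 4096          # creates Malloc0
$RPC -s $RPC_SOCK bdev_malloc_create 64 4096          # creates Malloc1
$RPC -s $RPC_SOCK nbd_start_disk Malloc0 /dev/nbd0    # expose the bdevs as nbd devices
$RPC -s $RPC_SOCK nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of=$TMP bs=4096 count=256          # 1 MiB random pattern
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if=$TMP of=$nbd bs=4096 count=256 oflag=direct # write the pattern to each disk
    cmp -b -n 1M $TMP $nbd                            # read back and verify
done
rm $TMP

$RPC -s $RPC_SOCK nbd_stop_disk /dev/nbd0
$RPC -s $RPC_SOCK nbd_stop_disk /dev/nbd1
$RPC -s $RPC_SOCK spdk_kill_instance SIGTERM          # end of the round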
00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.173 01:29:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:36.173 01:29:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.173 Malloc0 00:07:36.173 01:29:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:36.432 Malloc1 00:07:36.432 01:29:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:36.432 01:29:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:36.433 01:29:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:36.433 01:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:36.433 01:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.433 01:29:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:36.692 /dev/nbd0 00:07:36.692 01:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:36.692 01:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:36.692 1+0 records in 00:07:36.692 1+0 records out 
00:07:36.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228457 s, 17.9 MB/s 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.692 01:29:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:36.692 01:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.692 01:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.692 01:29:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:36.950 /dev/nbd1 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:36.951 1+0 records in 00:07:36.951 1+0 records out 00:07:36.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383949 s, 10.7 MB/s 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:36.951 01:29:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.951 01:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.518 { 00:07:37.518 "nbd_device": "/dev/nbd0", 00:07:37.518 "bdev_name": "Malloc0" 00:07:37.518 }, 00:07:37.518 { 00:07:37.518 "nbd_device": "/dev/nbd1", 00:07:37.518 "bdev_name": "Malloc1" 00:07:37.518 } 
00:07:37.518 ]' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.518 { 00:07:37.518 "nbd_device": "/dev/nbd0", 00:07:37.518 "bdev_name": "Malloc0" 00:07:37.518 }, 00:07:37.518 { 00:07:37.518 "nbd_device": "/dev/nbd1", 00:07:37.518 "bdev_name": "Malloc1" 00:07:37.518 } 00:07:37.518 ]' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:37.518 /dev/nbd1' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:37.518 /dev/nbd1' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:37.518 01:29:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:37.518 256+0 records in 00:07:37.518 256+0 records out 00:07:37.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474845 s, 221 MB/s 00:07:37.519 01:29:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.519 01:29:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:37.519 256+0 records in 00:07:37.519 256+0 records out 00:07:37.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253183 s, 41.4 MB/s 00:07:37.519 01:29:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:37.519 256+0 records in 00:07:37.519 256+0 records out 00:07:37.519 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303761 s, 34.5 MB/s 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:37.519 01:29:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.519 01:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.779 01:29:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:38.039 01:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:38.298 01:29:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:38.298 01:29:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:38.866 01:29:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:38.866 [2024-12-16 01:29:09.330001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.866 [2024-12-16 01:29:09.350595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.866 [2024-12-16 01:29:09.350606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.866 [2024-12-16 01:29:09.379720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.866 [2024-12-16 01:29:09.379833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:38.866 [2024-12-16 01:29:09.379846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:42.156 01:29:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 73193 /var/tmp/spdk-nbd.sock 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 73193 ']' 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
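The nbd_get_count checks interleaved through both rounds above reduce to a small amount of JSON handling. The sketch below reproduces it with the same jq and grep expressions seen in nbd_common.sh, assuming the JSON returned by nbd_get_disks has the shape shown in the log (objects with nbd_device and bdev_name fields).

# Sketch of the nbd_get_count logic traced above: ask the target which nbd
# devices are attached and count them. Socket path as in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk-nbd.sock

disks_json=$($RPC -s $RPC_SOCK nbd_get_disks)            # e.g. [{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}, ...]
disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$disks_name" | grep -c /dev/nbd || true)   # 0 once everything is stopped

echo "attached nbd devices: $count"
[ "$count" -eq 2 ] || echo "unexpected number of nbd devices"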
00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:42.156 01:29:12 event.app_repeat -- event/event.sh@39 -- # killprocess 73193 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 73193 ']' 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 73193 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73193 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.156 killing process with pid 73193 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73193' 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@973 -- # kill 73193 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@978 -- # wait 73193 00:07:42.156 spdk_app_start is called in Round 0. 00:07:42.156 Shutdown signal received, stop current app iteration 00:07:42.156 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:07:42.156 spdk_app_start is called in Round 1. 00:07:42.156 Shutdown signal received, stop current app iteration 00:07:42.156 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:07:42.156 spdk_app_start is called in Round 2. 00:07:42.156 Shutdown signal received, stop current app iteration 00:07:42.156 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:07:42.156 spdk_app_start is called in Round 3. 00:07:42.156 Shutdown signal received, stop current app iteration 00:07:42.156 01:29:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:42.156 01:29:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:42.156 00:07:42.156 real 0m18.619s 00:07:42.156 user 0m42.853s 00:07:42.156 sys 0m2.640s 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.156 01:29:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:42.156 ************************************ 00:07:42.156 END TEST app_repeat 00:07:42.156 ************************************ 00:07:42.156 01:29:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:42.156 01:29:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:42.156 01:29:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.156 01:29:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.156 01:29:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.156 ************************************ 00:07:42.156 START TEST cpu_locks 00:07:42.156 ************************************ 00:07:42.156 01:29:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:42.156 * Looking for test storage... 
00:07:42.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.416 01:29:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.416 --rc genhtml_branch_coverage=1 00:07:42.416 --rc genhtml_function_coverage=1 00:07:42.416 --rc genhtml_legend=1 00:07:42.416 --rc geninfo_all_blocks=1 00:07:42.416 --rc geninfo_unexecuted_blocks=1 00:07:42.416 00:07:42.416 ' 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.416 --rc genhtml_branch_coverage=1 00:07:42.416 --rc genhtml_function_coverage=1 
00:07:42.416 --rc genhtml_legend=1 00:07:42.416 --rc geninfo_all_blocks=1 00:07:42.416 --rc geninfo_unexecuted_blocks=1 00:07:42.416 00:07:42.416 ' 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.416 --rc genhtml_branch_coverage=1 00:07:42.416 --rc genhtml_function_coverage=1 00:07:42.416 --rc genhtml_legend=1 00:07:42.416 --rc geninfo_all_blocks=1 00:07:42.416 --rc geninfo_unexecuted_blocks=1 00:07:42.416 00:07:42.416 ' 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.416 --rc genhtml_branch_coverage=1 00:07:42.416 --rc genhtml_function_coverage=1 00:07:42.416 --rc genhtml_legend=1 00:07:42.416 --rc geninfo_all_blocks=1 00:07:42.416 --rc geninfo_unexecuted_blocks=1 00:07:42.416 00:07:42.416 ' 00:07:42.416 01:29:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:42.416 01:29:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:42.416 01:29:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:42.416 01:29:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.416 01:29:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.416 ************************************ 00:07:42.416 START TEST default_locks 00:07:42.416 ************************************ 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=73641 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 73641 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 73641 ']' 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.416 01:29:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.416 [2024-12-16 01:29:13.022672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:42.416 [2024-12-16 01:29:13.022790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73641 ] 00:07:42.675 [2024-12-16 01:29:13.173198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.676 [2024-12-16 01:29:13.195796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.676 [2024-12-16 01:29:13.232011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.935 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.935 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:42.935 01:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 73641 00:07:42.935 01:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.935 01:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 73641 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 73641 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 73641 ']' 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 73641 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73641 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.194 killing process with pid 73641 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73641' 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 73641 00:07:43.194 01:29:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 73641 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 73641 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73641 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 73641 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 73641 ']' 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.453 
01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.453 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.453 ERROR: process (pid: 73641) is no longer running 00:07:43.453 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73641) - No such process 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.454 00:07:43.454 real 0m1.097s 00:07:43.454 user 0m1.165s 00:07:43.454 sys 0m0.431s 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.454 01:29:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.454 ************************************ 00:07:43.454 END TEST default_locks 00:07:43.454 ************************************ 00:07:43.454 01:29:14 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:43.454 01:29:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.454 01:29:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.454 01:29:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:43.454 ************************************ 00:07:43.454 START TEST default_locks_via_rpc 00:07:43.454 ************************************ 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=73680 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 73680 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73680 ']' 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:43.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.454 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.713 [2024-12-16 01:29:14.156961] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:43.713 [2024-12-16 01:29:14.157073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73680 ] 00:07:43.713 [2024-12-16 01:29:14.292662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.713 [2024-12-16 01:29:14.314187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.713 [2024-12-16 01:29:14.354547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 73680 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 73680 00:07:43.973 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 73680 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 73680 ']' 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 73680 00:07:44.541 01:29:14 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73680 00:07:44.541 killing process with pid 73680 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73680' 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 73680 00:07:44.541 01:29:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 73680 00:07:44.541 00:07:44.541 real 0m1.092s 00:07:44.541 user 0m1.146s 00:07:44.541 sys 0m0.427s 00:07:44.541 01:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.541 01:29:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.541 ************************************ 00:07:44.541 END TEST default_locks_via_rpc 00:07:44.541 ************************************ 00:07:44.800 01:29:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:44.800 01:29:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.800 01:29:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.800 01:29:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.800 ************************************ 00:07:44.800 START TEST non_locking_app_on_locked_coremask 00:07:44.800 ************************************ 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=73720 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 73720 /var/tmp/spdk.sock 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73720 ']' 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
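The default_locks and default_locks_via_rpc runs above boil down to asking lslocks whether the spdk_tgt process holds its spdk_cpu_lock file, then killing the target. A minimal version of that check is sketched here; the pid is only an example value taken from the trace, and the retry/uname bookkeeping of killprocess is left out.

# Minimal sketch of the locks_exist / killprocess pattern traced above.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock   # true if the pid holds a CPU core lock file
}

pid=73641                                      # example pid reported by waitforlisten in the log
if locks_exist "$pid"; then
    echo "pid $pid holds an spdk_cpu_lock"
fi
kill -0 "$pid" && kill "$pid"                  # stop the target, as killprocess does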
00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.800 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.800 [2024-12-16 01:29:15.318330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:44.800 [2024-12-16 01:29:15.318457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73720 ] 00:07:45.059 [2024-12-16 01:29:15.468209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.059 [2024-12-16 01:29:15.489034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.059 [2024-12-16 01:29:15.525907] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=73723 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 73723 /var/tmp/spdk2.sock 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73723 ']' 00:07:45.059 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.060 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.060 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.060 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.060 01:29:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:45.060 [2024-12-16 01:29:15.713001] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:45.060 [2024-12-16 01:29:15.713129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73723 ] 00:07:45.319 [2024-12-16 01:29:15.876829] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:45.319 [2024-12-16 01:29:15.876908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.319 [2024-12-16 01:29:15.915476] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.619 [2024-12-16 01:29:15.989005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.619 01:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.619 01:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:45.619 01:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 73720 00:07:45.619 01:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73720 00:07:45.619 01:29:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 73720 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73720 ']' 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73720 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73720 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.578 killing process with pid 73720 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73720' 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73720 00:07:46.578 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73720 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 73723 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73723 ']' 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73723 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73723 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.147 killing process with pid 73723 00:07:47.147 01:29:17 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73723' 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73723 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73723 00:07:47.147 00:07:47.147 real 0m2.535s 00:07:47.147 user 0m2.886s 00:07:47.147 sys 0m0.844s 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.147 01:29:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.147 ************************************ 00:07:47.147 END TEST non_locking_app_on_locked_coremask 00:07:47.147 ************************************ 00:07:47.407 01:29:17 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:47.407 01:29:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.407 01:29:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.407 01:29:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:47.407 ************************************ 00:07:47.407 START TEST locking_app_on_unlocked_coremask 00:07:47.407 ************************************ 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=73778 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 73778 /var/tmp/spdk.sock 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73778 ']' 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.407 01:29:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.407 [2024-12-16 01:29:17.895291] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:47.407 [2024-12-16 01:29:17.895395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73778 ] 00:07:47.407 [2024-12-16 01:29:18.035452] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
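Throughout these cpu_locks tests the locks_exist helper (seen above as "lslocks -p <pid> | grep -q spdk_cpu_lock") is what verifies that a target really holds its CPU-core lock: it lists the file locks owned by the process and looks for an spdk_cpu_lock entry. A rough hand-run equivalent, assuming the same lock-file naming used elsewhere in this log:

  # Exit status 0 only if the process holds a lock on an spdk_cpu_lock_* file.
  lslocks -p 73720 | grep -q spdk_cpu_lock
  # The lock files themselves live under /var/tmp, one per claimed core.
  ls /var/tmp/spdk_cpu_lock_*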
00:07:47.407 [2024-12-16 01:29:18.035546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.407 [2024-12-16 01:29:18.059063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.666 [2024-12-16 01:29:18.098431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=73792 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 73792 /var/tmp/spdk2.sock 00:07:47.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73792 ']' 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.666 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.666 [2024-12-16 01:29:18.290622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:47.666 [2024-12-16 01:29:18.290912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73792 ] 00:07:47.925 [2024-12-16 01:29:18.457350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.925 [2024-12-16 01:29:18.504388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.184 [2024-12-16 01:29:18.586293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.184 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.184 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:48.184 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 73792 00:07:48.184 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73792 00:07:48.184 01:29:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 73778 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73778 ']' 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 73778 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73778 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.121 killing process with pid 73778 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73778' 00:07:49.121 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 73778 00:07:49.122 01:29:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 73778 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 73792 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73792 ']' 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 73792 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73792 00:07:49.689 killing process with pid 73792 00:07:49.689 01:29:20 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73792' 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 73792 00:07:49.689 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 73792 00:07:49.949 ************************************ 00:07:49.949 END TEST locking_app_on_unlocked_coremask 00:07:49.949 ************************************ 00:07:49.949 00:07:49.949 real 0m2.531s 00:07:49.949 user 0m2.849s 00:07:49.949 sys 0m0.880s 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.949 01:29:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:49.949 01:29:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.949 01:29:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.949 01:29:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.949 ************************************ 00:07:49.949 START TEST locking_app_on_locked_coremask 00:07:49.949 ************************************ 00:07:49.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=73846 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 73846 /var/tmp/spdk.sock 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73846 ']' 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.949 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.949 [2024-12-16 01:29:20.480627] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:49.949 [2024-12-16 01:29:20.480771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73846 ] 00:07:50.208 [2024-12-16 01:29:20.618871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.208 [2024-12-16 01:29:20.642029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.208 [2024-12-16 01:29:20.678351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=73849 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 73849 /var/tmp/spdk2.sock 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73849 /var/tmp/spdk2.sock 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73849 /var/tmp/spdk2.sock 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 73849 ']' 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.208 01:29:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:50.208 [2024-12-16 01:29:20.858407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
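Here the second target (pid 73849) is launched on the already-locked 0x1 mask without --disable-cpumask-locks, so the harness wraps waitforlisten in NOT: failing to start is the expected, passing outcome, as the claim error and "No such process" lines that follow show. A hedged sketch of that negative-test idiom, assuming NOT simply inverts the exit status of whatever it wraps:

  # Hypothetical stand-in for the harness's NOT helper: succeed only if the command fails.
  NOT() { ! "$@"; }
  # Expected to fail: core 0 is already locked by the first target (pid 73846).
  NOT ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock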
00:07:50.208 [2024-12-16 01:29:20.858714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73849 ] 00:07:50.467 [2024-12-16 01:29:21.015207] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 73846 has claimed it. 00:07:50.467 [2024-12-16 01:29:21.015266] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:51.035 ERROR: process (pid: 73849) is no longer running 00:07:51.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73849) - No such process 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 73846 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 73846 00:07:51.035 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 73846 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 73846 ']' 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 73846 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73846 00:07:51.294 killing process with pid 73846 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73846' 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 73846 00:07:51.294 01:29:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 73846 00:07:51.553 ************************************ 00:07:51.553 END TEST locking_app_on_locked_coremask 00:07:51.553 ************************************ 00:07:51.553 00:07:51.553 real 0m1.746s 00:07:51.553 user 0m2.078s 00:07:51.553 sys 0m0.456s 00:07:51.553 01:29:22 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.553 01:29:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.553 01:29:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:51.553 01:29:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.553 01:29:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.553 01:29:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:51.813 ************************************ 00:07:51.813 START TEST locking_overlapped_coremask 00:07:51.813 ************************************ 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=73899 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 73899 /var/tmp/spdk.sock 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73899 ']' 00:07:51.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.813 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:51.813 [2024-12-16 01:29:22.272119] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:51.813 [2024-12-16 01:29:22.272207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73899 ] 00:07:51.813 [2024-12-16 01:29:22.412841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.813 [2024-12-16 01:29:22.435443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.813 [2024-12-16 01:29:22.435572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.813 [2024-12-16 01:29:22.435572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.071 [2024-12-16 01:29:22.472416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=73905 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 73905 /var/tmp/spdk2.sock 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 73905 /var/tmp/spdk2.sock 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 73905 /var/tmp/spdk2.sock 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 73905 ']' 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:52.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.071 01:29:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:52.071 [2024-12-16 01:29:22.670864] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
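The two core masks used in this test overlap on exactly one core: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so their intersection is 0x4, i.e. core 2, which is why the second target's lock claim fails on core 2 in the lines below. The overlap can be checked with plain shell arithmetic:

  # 0x7 covers cores 0-2, 0x1c covers cores 2-4; only core 2 is shared.
  printf '0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4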
00:07:52.071 [2024-12-16 01:29:22.671169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73905 ] 00:07:52.330 [2024-12-16 01:29:22.837796] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73899 has claimed it. 00:07:52.330 [2024-12-16 01:29:22.837899] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:52.897 ERROR: process (pid: 73905) is no longer running 00:07:52.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (73905) - No such process 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 73899 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 73899 ']' 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 73899 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73899 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73899' 00:07:52.897 killing process with pid 73899 00:07:52.897 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 73899 00:07:52.897 01:29:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 73899 00:07:53.156 00:07:53.156 real 0m1.489s 00:07:53.156 user 0m4.255s 00:07:53.156 sys 0m0.301s 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:53.156 ************************************ 00:07:53.156 END TEST locking_overlapped_coremask 00:07:53.156 ************************************ 00:07:53.156 01:29:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:53.156 01:29:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.156 01:29:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.156 01:29:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:53.156 ************************************ 00:07:53.156 START TEST locking_overlapped_coremask_via_rpc 00:07:53.156 ************************************ 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:53.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=73945 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 73945 /var/tmp/spdk.sock 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73945 ']' 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.156 01:29:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.415 [2024-12-16 01:29:23.835879] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:53.415 [2024-12-16 01:29:23.835989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73945 ] 00:07:53.415 [2024-12-16 01:29:23.985000] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:53.415 [2024-12-16 01:29:23.985044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.415 [2024-12-16 01:29:24.007709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.415 [2024-12-16 01:29:24.007871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.415 [2024-12-16 01:29:24.007875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.415 [2024-12-16 01:29:24.049784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:54.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=73963 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 73963 /var/tmp/spdk2.sock 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73963 ']' 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.349 01:29:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.349 [2024-12-16 01:29:24.876442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:54.349 [2024-12-16 01:29:24.876600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73963 ] 00:07:54.607 [2024-12-16 01:29:25.043571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:54.607 [2024-12-16 01:29:25.043941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.607 [2024-12-16 01:29:25.094022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.607 [2024-12-16 01:29:25.097661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.607 [2024-12-16 01:29:25.097661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.607 [2024-12-16 01:29:25.178525] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.878 [2024-12-16 01:29:25.414755] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 73945 has claimed it. 
00:07:54.878 request: 00:07:54.878 { 00:07:54.878 "method": "framework_enable_cpumask_locks", 00:07:54.878 "req_id": 1 00:07:54.878 } 00:07:54.878 Got JSON-RPC error response 00:07:54.878 response: 00:07:54.878 { 00:07:54.878 "code": -32603, 00:07:54.878 "message": "Failed to claim CPU core: 2" 00:07:54.878 } 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 73945 /var/tmp/spdk.sock 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73945 ']' 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.878 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 73963 /var/tmp/spdk2.sock 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 73963 ']' 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:55.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
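The JSON-RPC exchange above is the runtime counterpart of the startup locks: both targets were launched with --disable-cpumask-locks, the first then claims its cores via framework_enable_cpumask_locks, and the same call on the second target (over /var/tmp/spdk2.sock) returns error -32603 because core 2 is already locked. Assuming the method is exposed through SPDK's rpc.py under the same name, the equivalent manual calls would be roughly:

  # Succeeds on the first target, which claims its cores at runtime.
  scripts/rpc.py framework_enable_cpumask_locks
  # Fails with -32603 "Failed to claim CPU core: 2" on the overlapping second target.
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks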
00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.148 01:29:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.406 ************************************ 00:07:55.406 END TEST locking_overlapped_coremask_via_rpc 00:07:55.406 ************************************ 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:55.406 00:07:55.406 real 0m2.287s 00:07:55.406 user 0m1.312s 00:07:55.406 sys 0m0.154s 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.406 01:29:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:55.665 01:29:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:55.665 01:29:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73945 ]] 00:07:55.665 01:29:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73945 00:07:55.665 01:29:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73945 ']' 00:07:55.665 01:29:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73945 00:07:55.665 01:29:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73945 00:07:55.666 killing process with pid 73945 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73945' 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73945 00:07:55.666 01:29:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73945 00:07:55.924 01:29:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73963 ]] 00:07:55.924 01:29:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73963 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73963 ']' 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73963 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.925 
01:29:26 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73963 00:07:55.925 killing process with pid 73963 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73963' 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 73963 00:07:55.925 01:29:26 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 73963 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.184 Process with pid 73945 is not found 00:07:56.184 Process with pid 73963 is not found 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 73945 ]] 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 73945 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73945 ']' 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73945 00:07:56.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73945) - No such process 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73945 is not found' 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 73963 ]] 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 73963 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 73963 ']' 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 73963 00:07:56.184 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73963) - No such process 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 73963 is not found' 00:07:56.184 01:29:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.184 00:07:56.184 real 0m13.885s 00:07:56.184 user 0m25.915s 00:07:56.184 sys 0m4.171s 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.184 01:29:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.184 ************************************ 00:07:56.184 END TEST cpu_locks 00:07:56.184 ************************************ 00:07:56.184 00:07:56.184 real 0m39.540s 00:07:56.184 user 1m18.731s 00:07:56.184 sys 0m7.509s 00:07:56.184 01:29:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.184 01:29:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.184 ************************************ 00:07:56.184 END TEST event 00:07:56.184 ************************************ 00:07:56.184 01:29:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:56.184 01:29:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.184 01:29:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.184 01:29:26 -- common/autotest_common.sh@10 -- # set +x 00:07:56.184 ************************************ 00:07:56.184 START TEST thread 00:07:56.184 ************************************ 00:07:56.184 01:29:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:56.184 * Looking for test storage... 
00:07:56.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:56.184 01:29:26 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.184 01:29:26 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.184 01:29:26 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.444 01:29:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.444 01:29:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.444 01:29:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.444 01:29:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.444 01:29:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.444 01:29:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.444 01:29:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.444 01:29:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.444 01:29:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.444 01:29:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.444 01:29:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.444 01:29:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:56.444 01:29:26 thread -- scripts/common.sh@345 -- # : 1 00:07:56.444 01:29:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.444 01:29:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.444 01:29:26 thread -- scripts/common.sh@365 -- # decimal 1 00:07:56.444 01:29:26 thread -- scripts/common.sh@353 -- # local d=1 00:07:56.444 01:29:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.444 01:29:26 thread -- scripts/common.sh@355 -- # echo 1 00:07:56.444 01:29:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.444 01:29:26 thread -- scripts/common.sh@366 -- # decimal 2 00:07:56.444 01:29:26 thread -- scripts/common.sh@353 -- # local d=2 00:07:56.444 01:29:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.444 01:29:26 thread -- scripts/common.sh@355 -- # echo 2 00:07:56.444 01:29:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.444 01:29:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.444 01:29:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.444 01:29:26 thread -- scripts/common.sh@368 -- # return 0 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.444 --rc genhtml_branch_coverage=1 00:07:56.444 --rc genhtml_function_coverage=1 00:07:56.444 --rc genhtml_legend=1 00:07:56.444 --rc geninfo_all_blocks=1 00:07:56.444 --rc geninfo_unexecuted_blocks=1 00:07:56.444 00:07:56.444 ' 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.444 --rc genhtml_branch_coverage=1 00:07:56.444 --rc genhtml_function_coverage=1 00:07:56.444 --rc genhtml_legend=1 00:07:56.444 --rc geninfo_all_blocks=1 00:07:56.444 --rc geninfo_unexecuted_blocks=1 00:07:56.444 00:07:56.444 ' 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:56.444 --rc genhtml_branch_coverage=1 00:07:56.444 --rc genhtml_function_coverage=1 00:07:56.444 --rc genhtml_legend=1 00:07:56.444 --rc geninfo_all_blocks=1 00:07:56.444 --rc geninfo_unexecuted_blocks=1 00:07:56.444 00:07:56.444 ' 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.444 --rc genhtml_branch_coverage=1 00:07:56.444 --rc genhtml_function_coverage=1 00:07:56.444 --rc genhtml_legend=1 00:07:56.444 --rc geninfo_all_blocks=1 00:07:56.444 --rc geninfo_unexecuted_blocks=1 00:07:56.444 00:07:56.444 ' 00:07:56.444 01:29:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.444 01:29:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.444 ************************************ 00:07:56.444 START TEST thread_poller_perf 00:07:56.444 ************************************ 00:07:56.444 01:29:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.444 [2024-12-16 01:29:26.917800] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:56.444 [2024-12-16 01:29:26.918098] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74086 ] 00:07:56.444 [2024-12-16 01:29:27.062425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.444 [2024-12-16 01:29:27.081051] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.444 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:57.820 [2024-12-16T01:29:28.478Z] ====================================== 00:07:57.820 [2024-12-16T01:29:28.478Z] busy:2210212757 (cyc) 00:07:57.820 [2024-12-16T01:29:28.478Z] total_run_count: 369000 00:07:57.820 [2024-12-16T01:29:28.478Z] tsc_hz: 2200000000 (cyc) 00:07:57.820 [2024-12-16T01:29:28.478Z] ====================================== 00:07:57.820 [2024-12-16T01:29:28.478Z] poller_cost: 5989 (cyc), 2722 (nsec) 00:07:57.820 00:07:57.820 real 0m1.224s 00:07:57.820 user 0m1.081s 00:07:57.820 sys 0m0.035s 00:07:57.820 01:29:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.820 01:29:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:57.820 ************************************ 00:07:57.820 END TEST thread_poller_perf 00:07:57.820 ************************************ 00:07:57.820 01:29:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.820 01:29:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:57.820 01:29:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.820 01:29:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:57.820 ************************************ 00:07:57.820 START TEST thread_poller_perf 00:07:57.820 ************************************ 00:07:57.820 01:29:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:57.820 [2024-12-16 01:29:28.192828] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:57.820 [2024-12-16 01:29:28.192917] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74122 ] 00:07:57.820 [2024-12-16 01:29:28.336236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.820 [2024-12-16 01:29:28.354393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.820 Running 1000 pollers for 1 seconds with 0 microseconds period. 
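The poller_cost reported above is simply the busy cycle count divided by the number of poller runs, converted to nanoseconds with the reported TSC frequency: 2210212757 / 369000 is about 5989 cycles, and 5989 cycles at 2.2 GHz is about 2722 ns. The same arithmetic in shell:

  # cycles per poller invocation, then nanoseconds at the reported 2200000000 Hz TSC
  echo $(( 2210212757 / 369000 ))             # ~5989 cyc
  echo $(( 5989 * 1000000000 / 2200000000 ))  # ~2722 nsec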
00:07:58.756 [2024-12-16T01:29:29.414Z] ====================================== 00:07:58.756 [2024-12-16T01:29:29.414Z] busy:2201737798 (cyc) 00:07:58.756 [2024-12-16T01:29:29.414Z] total_run_count: 4251000 00:07:58.756 [2024-12-16T01:29:29.414Z] tsc_hz: 2200000000 (cyc) 00:07:58.756 [2024-12-16T01:29:29.414Z] ====================================== 00:07:58.756 [2024-12-16T01:29:29.414Z] poller_cost: 517 (cyc), 235 (nsec) 00:07:58.756 ************************************ 00:07:58.756 END TEST thread_poller_perf 00:07:58.756 ************************************ 00:07:58.756 00:07:58.756 real 0m1.209s 00:07:58.756 user 0m1.072s 00:07:58.756 sys 0m0.031s 00:07:58.756 01:29:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.756 01:29:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.015 01:29:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:59.015 ************************************ 00:07:59.015 END TEST thread 00:07:59.015 ************************************ 00:07:59.015 00:07:59.015 real 0m2.720s 00:07:59.015 user 0m2.309s 00:07:59.015 sys 0m0.191s 00:07:59.015 01:29:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.015 01:29:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.015 01:29:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:59.015 01:29:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.015 01:29:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.015 01:29:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.015 01:29:29 -- common/autotest_common.sh@10 -- # set +x 00:07:59.015 ************************************ 00:07:59.015 START TEST app_cmdline 00:07:59.015 ************************************ 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.015 * Looking for test storage... 
00:07:59.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.015 01:29:29 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.015 --rc genhtml_branch_coverage=1 00:07:59.015 --rc genhtml_function_coverage=1 00:07:59.015 --rc genhtml_legend=1 00:07:59.015 --rc geninfo_all_blocks=1 00:07:59.015 --rc geninfo_unexecuted_blocks=1 00:07:59.015 00:07:59.015 ' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.015 --rc genhtml_branch_coverage=1 00:07:59.015 --rc genhtml_function_coverage=1 00:07:59.015 --rc genhtml_legend=1 00:07:59.015 --rc geninfo_all_blocks=1 00:07:59.015 --rc geninfo_unexecuted_blocks=1 00:07:59.015 
00:07:59.015 ' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.015 --rc genhtml_branch_coverage=1 00:07:59.015 --rc genhtml_function_coverage=1 00:07:59.015 --rc genhtml_legend=1 00:07:59.015 --rc geninfo_all_blocks=1 00:07:59.015 --rc geninfo_unexecuted_blocks=1 00:07:59.015 00:07:59.015 ' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.015 --rc genhtml_branch_coverage=1 00:07:59.015 --rc genhtml_function_coverage=1 00:07:59.015 --rc genhtml_legend=1 00:07:59.015 --rc geninfo_all_blocks=1 00:07:59.015 --rc geninfo_unexecuted_blocks=1 00:07:59.015 00:07:59.015 ' 00:07:59.015 01:29:29 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.015 01:29:29 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=74204 00:07:59.015 01:29:29 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.015 01:29:29 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 74204 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 74204 ']' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.015 01:29:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.274 [2024-12-16 01:29:29.736743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:59.274 [2024-12-16 01:29:29.736854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74204 ] 00:07:59.274 [2024-12-16 01:29:29.875280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.274 [2024-12-16 01:29:29.894401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.274 [2024-12-16 01:29:29.929174] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.533 01:29:30 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.533 01:29:30 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:59.533 01:29:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:59.792 { 00:07:59.792 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:07:59.792 "fields": { 00:07:59.792 "major": 25, 00:07:59.792 "minor": 1, 00:07:59.792 "patch": 0, 00:07:59.792 "suffix": "-pre", 00:07:59.792 "commit": "e01cb43b8" 00:07:59.792 } 00:07:59.792 } 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.792 01:29:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:59.792 01:29:30 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.051 request: 00:08:00.051 { 00:08:00.051 "method": "env_dpdk_get_mem_stats", 00:08:00.051 "req_id": 1 00:08:00.051 } 00:08:00.051 Got JSON-RPC error response 00:08:00.051 response: 00:08:00.051 { 00:08:00.051 "code": -32601, 00:08:00.051 "message": "Method not found" 00:08:00.051 } 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.051 01:29:30 app_cmdline -- app/cmdline.sh@1 -- # killprocess 74204 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 74204 ']' 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 74204 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74204 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.051 killing process with pid 74204 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74204' 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@973 -- # kill 74204 00:08:00.051 01:29:30 app_cmdline -- common/autotest_common.sh@978 -- # wait 74204 00:08:00.310 00:08:00.310 real 0m1.416s 00:08:00.310 user 0m1.827s 00:08:00.310 sys 0m0.374s 00:08:00.310 01:29:30 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.310 01:29:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 ************************************ 00:08:00.310 END TEST app_cmdline 00:08:00.310 ************************************ 00:08:00.310 01:29:30 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:00.310 01:29:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.310 01:29:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.310 01:29:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.310 ************************************ 00:08:00.310 START TEST version 00:08:00.310 ************************************ 00:08:00.310 01:29:30 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:00.569 * Looking for test storage... 
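The app_cmdline test above starts spdk_tgt with --rpcs-allowed limited to spdk_get_version and rpc_get_methods, then checks that an unlisted method is refused with JSON-RPC error -32601 rather than executed. A sketch of the same three calls against such a target (this assumes a target is already running with that allow-list; it is an illustration, not the cmdline.sh script itself):

```bash
#!/usr/bin/env bash
# Exercise the RPC allow-list behaviour shown in the app_cmdline test above.
# Assumes spdk_tgt was started with:
#   --rpcs-allowed spdk_get_version,rpc_get_methods
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" spdk_get_version        # allowed: prints the version object seen above
"$rpc" rpc_get_methods         # allowed: lists only the two permitted methods
"$rpc" env_dpdk_get_mem_stats  # not in the allow-list: "Method not found" (-32601)
```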
00:08:00.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:00.569 01:29:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.569 01:29:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.569 01:29:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.569 01:29:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.569 01:29:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.569 01:29:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.569 01:29:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.569 01:29:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.569 01:29:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.569 01:29:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.569 01:29:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.569 01:29:31 version -- scripts/common.sh@344 -- # case "$op" in 00:08:00.569 01:29:31 version -- scripts/common.sh@345 -- # : 1 00:08:00.569 01:29:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.569 01:29:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.569 01:29:31 version -- scripts/common.sh@365 -- # decimal 1 00:08:00.569 01:29:31 version -- scripts/common.sh@353 -- # local d=1 00:08:00.569 01:29:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.569 01:29:31 version -- scripts/common.sh@355 -- # echo 1 00:08:00.569 01:29:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.569 01:29:31 version -- scripts/common.sh@366 -- # decimal 2 00:08:00.569 01:29:31 version -- scripts/common.sh@353 -- # local d=2 00:08:00.569 01:29:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.569 01:29:31 version -- scripts/common.sh@355 -- # echo 2 00:08:00.569 01:29:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.569 01:29:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.569 01:29:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.569 01:29:31 version -- scripts/common.sh@368 -- # return 0 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.569 --rc genhtml_branch_coverage=1 00:08:00.569 --rc genhtml_function_coverage=1 00:08:00.569 --rc genhtml_legend=1 00:08:00.569 --rc geninfo_all_blocks=1 00:08:00.569 --rc geninfo_unexecuted_blocks=1 00:08:00.569 00:08:00.569 ' 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.569 --rc genhtml_branch_coverage=1 00:08:00.569 --rc genhtml_function_coverage=1 00:08:00.569 --rc genhtml_legend=1 00:08:00.569 --rc geninfo_all_blocks=1 00:08:00.569 --rc geninfo_unexecuted_blocks=1 00:08:00.569 00:08:00.569 ' 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:00.569 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:00.569 --rc genhtml_branch_coverage=1 00:08:00.569 --rc genhtml_function_coverage=1 00:08:00.569 --rc genhtml_legend=1 00:08:00.569 --rc geninfo_all_blocks=1 00:08:00.569 --rc geninfo_unexecuted_blocks=1 00:08:00.569 00:08:00.569 ' 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.569 --rc genhtml_branch_coverage=1 00:08:00.569 --rc genhtml_function_coverage=1 00:08:00.569 --rc genhtml_legend=1 00:08:00.569 --rc geninfo_all_blocks=1 00:08:00.569 --rc geninfo_unexecuted_blocks=1 00:08:00.569 00:08:00.569 ' 00:08:00.569 01:29:31 version -- app/version.sh@17 -- # get_header_version major 00:08:00.569 01:29:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # cut -f2 00:08:00.569 01:29:31 version -- app/version.sh@17 -- # major=25 00:08:00.569 01:29:31 version -- app/version.sh@18 -- # get_header_version minor 00:08:00.569 01:29:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # cut -f2 00:08:00.569 01:29:31 version -- app/version.sh@18 -- # minor=1 00:08:00.569 01:29:31 version -- app/version.sh@19 -- # get_header_version patch 00:08:00.569 01:29:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # cut -f2 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.569 01:29:31 version -- app/version.sh@19 -- # patch=0 00:08:00.569 01:29:31 version -- app/version.sh@20 -- # get_header_version suffix 00:08:00.569 01:29:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # cut -f2 00:08:00.569 01:29:31 version -- app/version.sh@14 -- # tr -d '"' 00:08:00.569 01:29:31 version -- app/version.sh@20 -- # suffix=-pre 00:08:00.569 01:29:31 version -- app/version.sh@22 -- # version=25.1 00:08:00.569 01:29:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.569 01:29:31 version -- app/version.sh@28 -- # version=25.1rc0 00:08:00.569 01:29:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:00.569 01:29:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:00.569 01:29:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:00.569 01:29:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:00.569 00:08:00.569 real 0m0.264s 00:08:00.569 user 0m0.177s 00:08:00.569 sys 0m0.126s 00:08:00.569 01:29:31 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.569 ************************************ 00:08:00.569 01:29:31 version -- common/autotest_common.sh@10 -- # set +x 00:08:00.569 END TEST version 00:08:00.569 ************************************ 00:08:00.830 01:29:31 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:00.830 01:29:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:00.830 01:29:31 -- spdk/autotest.sh@194 -- # uname -s 00:08:00.830 01:29:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:00.830 01:29:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:00.830 01:29:31 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:08:00.830 01:29:31 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:08:00.830 01:29:31 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:00.830 01:29:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.830 01:29:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.830 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:00.830 ************************************ 00:08:00.830 START TEST spdk_dd 00:08:00.830 ************************************ 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:00.830 * Looking for test storage... 00:08:00.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@345 -- # : 1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.830 01:29:31 spdk_dd -- scripts/common.sh@368 -- # return 0 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.830 01:29:31 spdk_dd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:00.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.830 --rc genhtml_branch_coverage=1 00:08:00.830 --rc genhtml_function_coverage=1 00:08:00.830 --rc genhtml_legend=1 00:08:00.830 --rc geninfo_all_blocks=1 00:08:00.831 --rc geninfo_unexecuted_blocks=1 00:08:00.831 00:08:00.831 ' 00:08:00.831 01:29:31 spdk_dd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.831 --rc genhtml_branch_coverage=1 00:08:00.831 --rc genhtml_function_coverage=1 00:08:00.831 --rc genhtml_legend=1 00:08:00.831 --rc geninfo_all_blocks=1 00:08:00.831 --rc geninfo_unexecuted_blocks=1 00:08:00.831 00:08:00.831 ' 00:08:00.831 01:29:31 spdk_dd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.831 --rc genhtml_branch_coverage=1 00:08:00.831 --rc genhtml_function_coverage=1 00:08:00.831 --rc genhtml_legend=1 00:08:00.831 --rc geninfo_all_blocks=1 00:08:00.831 --rc geninfo_unexecuted_blocks=1 00:08:00.831 00:08:00.831 ' 00:08:00.831 01:29:31 spdk_dd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:00.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.831 --rc genhtml_branch_coverage=1 00:08:00.831 --rc genhtml_function_coverage=1 00:08:00.831 --rc genhtml_legend=1 00:08:00.831 --rc geninfo_all_blocks=1 00:08:00.831 --rc geninfo_unexecuted_blocks=1 00:08:00.831 00:08:00.831 ' 00:08:00.831 01:29:31 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:00.831 01:29:31 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:08:00.831 01:29:31 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.831 01:29:31 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.831 01:29:31 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.831 01:29:31 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.831 01:29:31 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.831 01:29:31 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.831 01:29:31 spdk_dd -- paths/export.sh@5 -- # export PATH 00:08:00.831 01:29:31 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.831 01:29:31 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:01.400 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:01.400 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:01.400 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:01.400 01:29:31 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:01.400 01:29:31 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@233 -- # local class 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@235 -- # local progif 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@236 -- # class=01 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:08:01.400 01:29:31 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@18 -- # local i 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@27 -- # return 0 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:08:01.400 01:29:31 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:01.400 01:29:31 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@139 -- # local lib 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
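The nvme_in_userspace trace above reduces to filtering lspci output on PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe), then probing /sys/bus/pci/drivers/nvme for each hit. A simplified stand-in for that scan (the real helper in scripts/common.sh also honors PCI allow/block lists, which is omitted here):

```bash
#!/usr/bin/env bash
# Simplified version of the NVMe scan traced above: list PCI functions with
# class/subclass 01/08 and prog-if 02, and report whether the kernel nvme
# driver still owns each one.
while read -r bdf class _; do
    [[ $class == 0108 ]] || continue
    if [[ -e /sys/bus/pci/drivers/nvme/$bdf ]]; then
        echo "$bdf: bound to the kernel nvme driver"
    else
        echo "$bdf: usable from userspace"   # e.g. 0000:00:10.0 and 0000:00:11.0 in this run
    fi
done < <(lspci -mm -n -D | grep -i -- -p02 | tr -d '"')
```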
00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.11.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.12.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.11.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.12.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:08:01.400 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:01.401 * spdk_dd linked to liburing 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:01.401 01:29:31 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:01.401 01:29:31 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:01.401 01:29:31 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:01.401 01:29:31 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:01.401 01:29:31 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:01.401 01:29:31 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:01.401 01:29:31 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 
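check_liburing above decides whether the spdk_dd binary itself depends on liburing by walking the NEEDED entries of its dynamic section; once liburing.so.2 turns up it prints '* spdk_dd linked to liburing' and goes on to source build_config.sh. A minimal sketch of that walk (wrapper and output text here are illustrative, not the dd/common.sh function verbatim):

```bash
#!/usr/bin/env bash
# Walk the NEEDED entries of a binary's dynamic section, as check_liburing
# does above, and flag whether liburing is among its runtime dependencies.
binary=${1:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}
liburing_in_use=0
while read -r _ lib _; do                     # each line: "NEEDED  <library>"
    if [[ $lib == liburing.so.* ]]; then
        printf '* %s linked to liburing\n' "${binary##*/}"
        liburing_in_use=1
        break
    fi
done < <(objdump -p "$binary" | grep NEEDED)
echo "liburing_in_use=$liburing_in_use"
```

In the trace, the helper then sources build_config.sh (the CONFIG_* listing that continues below), rechecks one of those settings, and exports liburing_in_use=1 so dd.sh can skip the uring-specific path when it is not available.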
00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:08:01.402 01:29:31 
spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:08:01.402 01:29:31 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:08:01.402 01:29:31 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:08:01.402 01:29:31 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:08:01.402 01:29:31 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:08:01.402 01:29:31 spdk_dd -- dd/common.sh@153 -- # return 0 00:08:01.402 01:29:31 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:01.402 01:29:31 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:01.402 01:29:31 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:01.402 01:29:31 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.402 01:29:31 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:01.402 ************************************ 00:08:01.402 START TEST spdk_dd_basic_rw 00:08:01.402 ************************************ 00:08:01.402 01:29:31 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:08:01.402 * Looking for test storage... 
00:08:01.402 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:01.402 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:01.402 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lcov --version 00:08:01.402 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:01.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.661 --rc genhtml_branch_coverage=1 00:08:01.661 --rc genhtml_function_coverage=1 00:08:01.661 --rc genhtml_legend=1 00:08:01.661 --rc geninfo_all_blocks=1 00:08:01.661 --rc geninfo_unexecuted_blocks=1 00:08:01.661 00:08:01.661 ' 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:01.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.661 --rc genhtml_branch_coverage=1 00:08:01.661 --rc genhtml_function_coverage=1 00:08:01.661 --rc genhtml_legend=1 00:08:01.661 --rc geninfo_all_blocks=1 00:08:01.661 --rc geninfo_unexecuted_blocks=1 00:08:01.661 00:08:01.661 ' 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:01.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.661 --rc genhtml_branch_coverage=1 00:08:01.661 --rc genhtml_function_coverage=1 00:08:01.661 --rc genhtml_legend=1 00:08:01.661 --rc geninfo_all_blocks=1 00:08:01.661 --rc geninfo_unexecuted_blocks=1 00:08:01.661 00:08:01.661 ' 00:08:01.661 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:01.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.661 --rc genhtml_branch_coverage=1 00:08:01.661 --rc genhtml_function_coverage=1 00:08:01.662 --rc genhtml_legend=1 00:08:01.662 --rc geninfo_all_blocks=1 00:08:01.662 --rc geninfo_unexecuted_blocks=1 00:08:01.662 00:08:01.662 ' 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.662 01:29:32 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:08:01.662 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:08:01.923 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:01.923 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:01.924 ************************************ 00:08:01.924 START TEST dd_bs_lt_native_bs 00:08:01.924 ************************************ 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:01.924 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:01.924 { 00:08:01.924 "subsystems": [ 00:08:01.924 { 00:08:01.924 "subsystem": "bdev", 00:08:01.924 "config": [ 00:08:01.924 { 00:08:01.924 "params": { 00:08:01.924 "trtype": "pcie", 00:08:01.924 "traddr": "0000:00:10.0", 00:08:01.924 "name": "Nvme0" 00:08:01.924 }, 00:08:01.924 "method": "bdev_nvme_attach_controller" 00:08:01.924 }, 00:08:01.924 { 00:08:01.924 "method": "bdev_wait_for_examine" 00:08:01.924 } 00:08:01.924 ] 00:08:01.924 } 00:08:01.924 ] 00:08:01.924 } 00:08:01.924 [2024-12-16 01:29:32.397320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:01.924 [2024-12-16 01:29:32.397423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74543 ] 00:08:01.924 [2024-12-16 01:29:32.549859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.924 [2024-12-16 01:29:32.573728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.183 [2024-12-16 01:29:32.608861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.183 [2024-12-16 01:29:32.702831] spdk_dd.c:1159:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:02.183 [2024-12-16 01:29:32.702902] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:02.183 [2024-12-16 01:29:32.777589] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:02.183 00:08:02.183 real 0m0.493s 00:08:02.183 user 0m0.324s 00:08:02.183 sys 0m0.121s 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.183 
************************************ 00:08:02.183 END TEST dd_bs_lt_native_bs 00:08:02.183 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:08:02.183 ************************************ 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.442 ************************************ 00:08:02.442 START TEST dd_rw 00:08:02.442 ************************************ 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:02.442 01:29:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.009 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:03.009 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:03.009 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.009 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.009 [2024-12-16 01:29:33.527151] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
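For context on the numbers driving the loop that is starting here: get_native_nvme_bs (dd/common.sh@124-134 in the trace) resolved the drive's native block size by matching the spdk_nvme_identify dump first against "Current LBA Format: LBA Format #NN" and then against "LBA Format #NN: Data Size: NNNN", which yielded 4096 for the QEMU controller at 0000:00:10.0. The dd_bs_lt_native_bs case just above used that value to confirm spdk_dd rejects a --bs of 2048 smaller than the native size, and dd_rw now sweeps block sizes of native_bs << 0..2 (4096, 8192, 16384) at queue depths 1 and 64. A rough, self-contained equivalent of the lookup, assuming the helper behaves the way the xtrace suggests:

    # Hypothetical re-implementation of get_native_nvme_bs based on the trace above.
    pci=0000:00:10.0
    id=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")

    re_current='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re_current ]] && lbaf=${BASH_REMATCH[1]}        # -> 04 in this run

    re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id =~ $re_size ]] && native_bs=${BASH_REMATCH[1]}      # -> 4096

    echo "$native_bs"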
00:08:03.009 [2024-12-16 01:29:33.527267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74574 ] 00:08:03.009 { 00:08:03.009 "subsystems": [ 00:08:03.009 { 00:08:03.009 "subsystem": "bdev", 00:08:03.009 "config": [ 00:08:03.009 { 00:08:03.009 "params": { 00:08:03.009 "trtype": "pcie", 00:08:03.009 "traddr": "0000:00:10.0", 00:08:03.009 "name": "Nvme0" 00:08:03.009 }, 00:08:03.009 "method": "bdev_nvme_attach_controller" 00:08:03.009 }, 00:08:03.009 { 00:08:03.009 "method": "bdev_wait_for_examine" 00:08:03.009 } 00:08:03.009 ] 00:08:03.009 } 00:08:03.009 ] 00:08:03.009 } 00:08:03.267 [2024-12-16 01:29:33.675759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.267 [2024-12-16 01:29:33.700548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.267 [2024-12-16 01:29:33.736013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.267  [2024-12-16T01:29:34.184Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:03.526 00:08:03.526 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:03.526 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:03.526 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.526 01:29:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.526 { 00:08:03.526 "subsystems": [ 00:08:03.526 { 00:08:03.526 "subsystem": "bdev", 00:08:03.526 "config": [ 00:08:03.526 { 00:08:03.526 "params": { 00:08:03.526 "trtype": "pcie", 00:08:03.526 "traddr": "0000:00:10.0", 00:08:03.526 "name": "Nvme0" 00:08:03.526 }, 00:08:03.526 "method": "bdev_nvme_attach_controller" 00:08:03.526 }, 00:08:03.526 { 00:08:03.526 "method": "bdev_wait_for_examine" 00:08:03.526 } 00:08:03.526 ] 00:08:03.526 } 00:08:03.526 ] 00:08:03.526 } 00:08:03.526 [2024-12-16 01:29:33.996964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:03.526 [2024-12-16 01:29:33.997054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74587 ] 00:08:03.526 [2024-12-16 01:29:34.140944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.526 [2024-12-16 01:29:34.159092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.785 [2024-12-16 01:29:34.187917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.785  [2024-12-16T01:29:34.443Z] Copying: 60/60 [kB] (average 19 MBps) 00:08:03.785 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:03.785 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:03.785 [2024-12-16 01:29:34.440203] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
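The copy that just completed illustrates the cycle every dd_rw iteration follows: a 61440-byte payload in dd.dump0 is written through spdk_dd to the Nvme0n1 bdev at the chosen --bs/--qd, read back into dd.dump1, byte-compared with diff, and then the region is zeroed by clear_nvme before the next block-size/queue-depth combination. Condensed into one hedged sketch (SPDK_DD stands in for the full /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd path, and the process-substitution form of the gen_conf hand-off is an assumption; the trace only shows the resulting /dev/fd/62):

    # One write/read/verify round of basic_rw.sh, as implied by the trace above.
    # $test_file0 (dd.dump0) already holds the 61440-byte payload produced by gen_bytes.
    bs=4096 qd=1 count=15

    "$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" \
        --json <(gen_conf)                                   # write dd.dump0 to the bdev
    "$SPDK_DD" --ib=Nvme0n1 --of="$test_file1" --bs="$bs" --qd="$qd" \
        --count="$count" --json <(gen_conf)                  # read it back into dd.dump1

    diff -q "$test_file0" "$test_file1"                      # contents must be identical

    # clear_nvme: blank the region with zeroes before the next bs/qd combination.
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)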
00:08:03.785 [2024-12-16 01:29:34.440273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74603 ] 00:08:04.044 { 00:08:04.044 "subsystems": [ 00:08:04.044 { 00:08:04.044 "subsystem": "bdev", 00:08:04.044 "config": [ 00:08:04.044 { 00:08:04.044 "params": { 00:08:04.044 "trtype": "pcie", 00:08:04.044 "traddr": "0000:00:10.0", 00:08:04.044 "name": "Nvme0" 00:08:04.044 }, 00:08:04.044 "method": "bdev_nvme_attach_controller" 00:08:04.044 }, 00:08:04.044 { 00:08:04.044 "method": "bdev_wait_for_examine" 00:08:04.044 } 00:08:04.044 ] 00:08:04.044 } 00:08:04.044 ] 00:08:04.044 } 00:08:04.044 [2024-12-16 01:29:34.576697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.044 [2024-12-16 01:29:34.595128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.044 [2024-12-16 01:29:34.622486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.301  [2024-12-16T01:29:34.959Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.301 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:04.301 01:29:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.867 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:04.867 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.867 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.867 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.867 [2024-12-16 01:29:35.463563] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:04.867 [2024-12-16 01:29:35.463675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74622 ] 00:08:04.867 { 00:08:04.867 "subsystems": [ 00:08:04.867 { 00:08:04.867 "subsystem": "bdev", 00:08:04.867 "config": [ 00:08:04.867 { 00:08:04.867 "params": { 00:08:04.867 "trtype": "pcie", 00:08:04.867 "traddr": "0000:00:10.0", 00:08:04.867 "name": "Nvme0" 00:08:04.867 }, 00:08:04.867 "method": "bdev_nvme_attach_controller" 00:08:04.867 }, 00:08:04.867 { 00:08:04.867 "method": "bdev_wait_for_examine" 00:08:04.867 } 00:08:04.867 ] 00:08:04.867 } 00:08:04.867 ] 00:08:04.867 } 00:08:05.125 [2024-12-16 01:29:35.610084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.125 [2024-12-16 01:29:35.628601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.125 [2024-12-16 01:29:35.656722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.126  [2024-12-16T01:29:36.042Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:05.384 00:08:05.384 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:05.384 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:05.384 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.384 01:29:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.384 { 00:08:05.384 "subsystems": [ 00:08:05.384 { 00:08:05.384 "subsystem": "bdev", 00:08:05.384 "config": [ 00:08:05.384 { 00:08:05.384 "params": { 00:08:05.384 "trtype": "pcie", 00:08:05.384 "traddr": "0000:00:10.0", 00:08:05.384 "name": "Nvme0" 00:08:05.384 }, 00:08:05.384 "method": "bdev_nvme_attach_controller" 00:08:05.384 }, 00:08:05.384 { 00:08:05.384 "method": "bdev_wait_for_examine" 00:08:05.384 } 00:08:05.384 ] 00:08:05.384 } 00:08:05.384 ] 00:08:05.384 } 00:08:05.384 [2024-12-16 01:29:35.924589] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:05.384 [2024-12-16 01:29:35.924699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74635 ] 00:08:05.643 [2024-12-16 01:29:36.069475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.643 [2024-12-16 01:29:36.087529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.643 [2024-12-16 01:29:36.115650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.643  [2024-12-16T01:29:36.560Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:05.902 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:05.902 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:05.902 { 00:08:05.902 "subsystems": [ 00:08:05.902 { 00:08:05.902 "subsystem": "bdev", 00:08:05.902 "config": [ 00:08:05.902 { 00:08:05.902 "params": { 00:08:05.902 "trtype": "pcie", 00:08:05.902 "traddr": "0000:00:10.0", 00:08:05.902 "name": "Nvme0" 00:08:05.902 }, 00:08:05.902 "method": "bdev_nvme_attach_controller" 00:08:05.902 }, 00:08:05.902 { 00:08:05.902 "method": "bdev_wait_for_examine" 00:08:05.902 } 00:08:05.902 ] 00:08:05.902 } 00:08:05.902 ] 00:08:05.902 } 00:08:05.902 [2024-12-16 01:29:36.386223] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:05.902 [2024-12-16 01:29:36.386855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74651 ] 00:08:05.902 [2024-12-16 01:29:36.532856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.902 [2024-12-16 01:29:36.551159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.161 [2024-12-16 01:29:36.580070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.161  [2024-12-16T01:29:36.819Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:06.161 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:06.161 01:29:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.728 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:06.728 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:06.728 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.728 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.728 [2024-12-16 01:29:37.316160] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
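The JSON block printed next, and again around every spdk_dd invocation in this test, is the output of gen_conf: a throwaway bdev-subsystem configuration that attaches the PCIe controller at 0000:00:10.0 as Nvme0 and then waits for bdev examination. It reaches spdk_dd as --json /dev/fd/62, i.e. over an anonymous file descriptor rather than a config file on disk. A sketch of that idiom with the payload copied from the log (the process-substitution plumbing itself is an assumption):

    # Hypothetical gen_conf emitting the bdev config seen throughout this log.
    gen_conf() {
        printf '%s\n' '{
          "subsystems": [
            {
              "subsystem": "bdev",
              "config": [
                {
                  "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
                  "method": "bdev_nvme_attach_controller"
                },
                { "method": "bdev_wait_for_examine" }
              ]
            }
          ]
        }'
    }

    "$SPDK_DD" --if="$test_file0" --ob=Nvme0n1 --bs=8192 --qd=1 --json <(gen_conf)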
00:08:06.728 { 00:08:06.728 "subsystems": [ 00:08:06.728 { 00:08:06.728 "subsystem": "bdev", 00:08:06.728 "config": [ 00:08:06.728 { 00:08:06.728 "params": { 00:08:06.728 "trtype": "pcie", 00:08:06.728 "traddr": "0000:00:10.0", 00:08:06.728 "name": "Nvme0" 00:08:06.728 }, 00:08:06.728 "method": "bdev_nvme_attach_controller" 00:08:06.728 }, 00:08:06.728 { 00:08:06.728 "method": "bdev_wait_for_examine" 00:08:06.728 } 00:08:06.728 ] 00:08:06.728 } 00:08:06.728 ] 00:08:06.728 } 00:08:06.728 [2024-12-16 01:29:37.316270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74664 ] 00:08:06.987 [2024-12-16 01:29:37.462434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.987 [2024-12-16 01:29:37.481478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.987 [2024-12-16 01:29:37.509359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.987  [2024-12-16T01:29:37.906Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:07.248 00:08:07.248 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:07.248 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:07.248 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.248 01:29:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.248 [2024-12-16 01:29:37.774158] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:07.248 [2024-12-16 01:29:37.774262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74678 ] 00:08:07.248 { 00:08:07.248 "subsystems": [ 00:08:07.248 { 00:08:07.248 "subsystem": "bdev", 00:08:07.248 "config": [ 00:08:07.248 { 00:08:07.248 "params": { 00:08:07.248 "trtype": "pcie", 00:08:07.248 "traddr": "0000:00:10.0", 00:08:07.248 "name": "Nvme0" 00:08:07.248 }, 00:08:07.248 "method": "bdev_nvme_attach_controller" 00:08:07.248 }, 00:08:07.248 { 00:08:07.248 "method": "bdev_wait_for_examine" 00:08:07.248 } 00:08:07.248 ] 00:08:07.248 } 00:08:07.248 ] 00:08:07.248 } 00:08:07.508 [2024-12-16 01:29:37.919235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.508 [2024-12-16 01:29:37.937683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.508 [2024-12-16 01:29:37.965720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.508  [2024-12-16T01:29:38.166Z] Copying: 56/56 [kB] (average 27 MBps) 00:08:07.508 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.767 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:07.767 [2024-12-16 01:29:38.219035] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:07.767 [2024-12-16 01:29:38.219138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74693 ] 00:08:07.767 { 00:08:07.767 "subsystems": [ 00:08:07.767 { 00:08:07.767 "subsystem": "bdev", 00:08:07.767 "config": [ 00:08:07.767 { 00:08:07.767 "params": { 00:08:07.767 "trtype": "pcie", 00:08:07.767 "traddr": "0000:00:10.0", 00:08:07.767 "name": "Nvme0" 00:08:07.767 }, 00:08:07.767 "method": "bdev_nvme_attach_controller" 00:08:07.767 }, 00:08:07.767 { 00:08:07.767 "method": "bdev_wait_for_examine" 00:08:07.767 } 00:08:07.767 ] 00:08:07.767 } 00:08:07.767 ] 00:08:07.767 } 00:08:07.767 [2024-12-16 01:29:38.359065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.767 [2024-12-16 01:29:38.378492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.767 [2024-12-16 01:29:38.406127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.026  [2024-12-16T01:29:38.684Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:08.026 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:08.026 01:29:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.594 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:08.594 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:08.594 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:08.594 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.594 [2024-12-16 01:29:39.192778] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:08.594 [2024-12-16 01:29:39.192888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74712 ] 00:08:08.594 { 00:08:08.594 "subsystems": [ 00:08:08.594 { 00:08:08.594 "subsystem": "bdev", 00:08:08.594 "config": [ 00:08:08.594 { 00:08:08.594 "params": { 00:08:08.594 "trtype": "pcie", 00:08:08.594 "traddr": "0000:00:10.0", 00:08:08.594 "name": "Nvme0" 00:08:08.594 }, 00:08:08.594 "method": "bdev_nvme_attach_controller" 00:08:08.594 }, 00:08:08.594 { 00:08:08.594 "method": "bdev_wait_for_examine" 00:08:08.594 } 00:08:08.594 ] 00:08:08.594 } 00:08:08.594 ] 00:08:08.594 } 00:08:08.852 [2024-12-16 01:29:39.338765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.852 [2024-12-16 01:29:39.357493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.852 [2024-12-16 01:29:39.388067] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.852  [2024-12-16T01:29:39.769Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:09.111 00:08:09.111 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:09.111 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:09.111 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.111 01:29:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.111 { 00:08:09.111 "subsystems": [ 00:08:09.111 { 00:08:09.111 "subsystem": "bdev", 00:08:09.111 "config": [ 00:08:09.111 { 00:08:09.111 "params": { 00:08:09.111 "trtype": "pcie", 00:08:09.111 "traddr": "0000:00:10.0", 00:08:09.111 "name": "Nvme0" 00:08:09.111 }, 00:08:09.111 "method": "bdev_nvme_attach_controller" 00:08:09.111 }, 00:08:09.111 { 00:08:09.111 "method": "bdev_wait_for_examine" 00:08:09.111 } 00:08:09.111 ] 00:08:09.111 } 00:08:09.111 ] 00:08:09.111 } 00:08:09.111 [2024-12-16 01:29:39.653266] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:09.111 [2024-12-16 01:29:39.653411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74726 ] 00:08:09.370 [2024-12-16 01:29:39.798793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.370 [2024-12-16 01:29:39.817909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.370 [2024-12-16 01:29:39.845502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.370  [2024-12-16T01:29:40.287Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:09.629 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:09.629 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.629 { 00:08:09.629 "subsystems": [ 00:08:09.629 { 00:08:09.629 "subsystem": "bdev", 00:08:09.630 "config": [ 00:08:09.630 { 00:08:09.630 "params": { 00:08:09.630 "trtype": "pcie", 00:08:09.630 "traddr": "0000:00:10.0", 00:08:09.630 "name": "Nvme0" 00:08:09.630 }, 00:08:09.630 "method": "bdev_nvme_attach_controller" 00:08:09.630 }, 00:08:09.630 { 00:08:09.630 "method": "bdev_wait_for_examine" 00:08:09.630 } 00:08:09.630 ] 00:08:09.630 } 00:08:09.630 ] 00:08:09.630 } 00:08:09.630 [2024-12-16 01:29:40.117315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:09.630 [2024-12-16 01:29:40.117420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74741 ] 00:08:09.630 [2024-12-16 01:29:40.263463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.630 [2024-12-16 01:29:40.282104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.888 [2024-12-16 01:29:40.311011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.888  [2024-12-16T01:29:40.546Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:09.888 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:09.888 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.511 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:10.511 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:10.511 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:10.511 01:29:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:10.511 [2024-12-16 01:29:41.050510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:10.511 [2024-12-16 01:29:41.050619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74759 ] 00:08:10.511 { 00:08:10.511 "subsystems": [ 00:08:10.511 { 00:08:10.511 "subsystem": "bdev", 00:08:10.511 "config": [ 00:08:10.511 { 00:08:10.511 "params": { 00:08:10.511 "trtype": "pcie", 00:08:10.511 "traddr": "0000:00:10.0", 00:08:10.511 "name": "Nvme0" 00:08:10.511 }, 00:08:10.511 "method": "bdev_nvme_attach_controller" 00:08:10.511 }, 00:08:10.511 { 00:08:10.511 "method": "bdev_wait_for_examine" 00:08:10.511 } 00:08:10.511 ] 00:08:10.511 } 00:08:10.511 ] 00:08:10.511 } 00:08:10.787 [2024-12-16 01:29:41.197901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.787 [2024-12-16 01:29:41.216148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.787 [2024-12-16 01:29:41.244072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.787  [2024-12-16T01:29:41.445Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:10.787 00:08:11.046 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:11.046 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:11.046 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.046 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.046 { 00:08:11.046 "subsystems": [ 00:08:11.046 { 00:08:11.046 "subsystem": "bdev", 00:08:11.046 "config": [ 00:08:11.046 { 00:08:11.046 "params": { 00:08:11.046 "trtype": "pcie", 00:08:11.046 "traddr": "0000:00:10.0", 00:08:11.046 "name": "Nvme0" 00:08:11.046 }, 00:08:11.046 "method": "bdev_nvme_attach_controller" 00:08:11.046 }, 00:08:11.046 { 00:08:11.046 "method": "bdev_wait_for_examine" 00:08:11.046 } 00:08:11.046 ] 00:08:11.046 } 00:08:11.046 ] 00:08:11.046 } 00:08:11.046 [2024-12-16 01:29:41.504784] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:11.046 [2024-12-16 01:29:41.504888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74774 ] 00:08:11.046 [2024-12-16 01:29:41.650835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.046 [2024-12-16 01:29:41.669505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.046 [2024-12-16 01:29:41.697378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.304  [2024-12-16T01:29:41.962Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:11.304 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:11.304 01:29:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:11.563 [2024-12-16 01:29:41.966964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:11.563 [2024-12-16 01:29:41.967071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74784 ] 00:08:11.563 { 00:08:11.563 "subsystems": [ 00:08:11.563 { 00:08:11.563 "subsystem": "bdev", 00:08:11.563 "config": [ 00:08:11.563 { 00:08:11.563 "params": { 00:08:11.563 "trtype": "pcie", 00:08:11.563 "traddr": "0000:00:10.0", 00:08:11.563 "name": "Nvme0" 00:08:11.563 }, 00:08:11.563 "method": "bdev_nvme_attach_controller" 00:08:11.563 }, 00:08:11.563 { 00:08:11.563 "method": "bdev_wait_for_examine" 00:08:11.563 } 00:08:11.563 ] 00:08:11.563 } 00:08:11.563 ] 00:08:11.563 } 00:08:11.563 [2024-12-16 01:29:42.113524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.563 [2024-12-16 01:29:42.132018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.563 [2024-12-16 01:29:42.159894] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.822  [2024-12-16T01:29:42.480Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:11.822 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:11.822 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.389 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:12.389 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:12.389 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.389 01:29:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.389 [2024-12-16 01:29:42.880191] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:12.389 [2024-12-16 01:29:42.880777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74803 ] 00:08:12.389 { 00:08:12.389 "subsystems": [ 00:08:12.389 { 00:08:12.389 "subsystem": "bdev", 00:08:12.389 "config": [ 00:08:12.389 { 00:08:12.389 "params": { 00:08:12.389 "trtype": "pcie", 00:08:12.389 "traddr": "0000:00:10.0", 00:08:12.389 "name": "Nvme0" 00:08:12.389 }, 00:08:12.389 "method": "bdev_nvme_attach_controller" 00:08:12.389 }, 00:08:12.389 { 00:08:12.389 "method": "bdev_wait_for_examine" 00:08:12.389 } 00:08:12.389 ] 00:08:12.389 } 00:08:12.389 ] 00:08:12.389 } 00:08:12.389 [2024-12-16 01:29:43.026470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.389 [2024-12-16 01:29:43.045491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.648 [2024-12-16 01:29:43.075904] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.648  [2024-12-16T01:29:43.306Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:12.648 00:08:12.648 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:12.648 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:12.648 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:12.648 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:12.906 [2024-12-16 01:29:43.335753] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:12.907 [2024-12-16 01:29:43.335850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74816 ] 00:08:12.907 { 00:08:12.907 "subsystems": [ 00:08:12.907 { 00:08:12.907 "subsystem": "bdev", 00:08:12.907 "config": [ 00:08:12.907 { 00:08:12.907 "params": { 00:08:12.907 "trtype": "pcie", 00:08:12.907 "traddr": "0000:00:10.0", 00:08:12.907 "name": "Nvme0" 00:08:12.907 }, 00:08:12.907 "method": "bdev_nvme_attach_controller" 00:08:12.907 }, 00:08:12.907 { 00:08:12.907 "method": "bdev_wait_for_examine" 00:08:12.907 } 00:08:12.907 ] 00:08:12.907 } 00:08:12.907 ] 00:08:12.907 } 00:08:12.907 [2024-12-16 01:29:43.478124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.907 [2024-12-16 01:29:43.496402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.907 [2024-12-16 01:29:43.524801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.165  [2024-12-16T01:29:43.823Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:13.165 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.165 01:29:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.165 { 00:08:13.165 "subsystems": [ 00:08:13.165 { 00:08:13.165 "subsystem": "bdev", 00:08:13.165 "config": [ 00:08:13.165 { 00:08:13.165 "params": { 00:08:13.165 "trtype": "pcie", 00:08:13.165 "traddr": "0000:00:10.0", 00:08:13.165 "name": "Nvme0" 00:08:13.165 }, 00:08:13.165 "method": "bdev_nvme_attach_controller" 00:08:13.165 }, 00:08:13.165 { 00:08:13.165 "method": "bdev_wait_for_examine" 00:08:13.165 } 00:08:13.165 ] 00:08:13.165 } 00:08:13.165 ] 00:08:13.165 } 00:08:13.165 [2024-12-16 01:29:43.794512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:13.165 [2024-12-16 01:29:43.794632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74832 ] 00:08:13.424 [2024-12-16 01:29:43.940411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.424 [2024-12-16 01:29:43.958721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.424 [2024-12-16 01:29:43.986530] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.424  [2024-12-16T01:29:44.341Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:13.683 00:08:13.683 00:08:13.683 real 0m11.306s 00:08:13.683 user 0m8.356s 00:08:13.683 sys 0m3.579s 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.683 ************************************ 00:08:13.683 END TEST dd_rw 00:08:13.683 ************************************ 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.683 ************************************ 00:08:13.683 START TEST dd_rw_offset 00:08:13.683 ************************************ 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=pj6d9p7xaufth58sd63qwp9h880ehnbgqw5vpvozv7q8nskbxo2t3nyu49k6ql7ohx6p1zcpu57856n3q0huro3iuo983ayyyj4pd8epxdiefid5g26p9udlr2prwep9zb3ezs6u3gevugwzeg77pcotre5lc11ckllfk7pply9b3c5dqs90435fx5kiz7qo9y62u7tucrwc209ednvz59eoiw2vd4kk2bo3acw0dv22gww6ysn27w0kcpd0zr8ktqczsvdjzo8msg74mby0zqtnmwk7axxfxgb3tor2t3i16k0aqdwc1bmlsgf9wmb2ja8saph4noy7hsjwzq4taqhsyxqs6jkiqi1u09vfmgzmicsx5yqvre0qfqiuqh3lypzlucmz4dw6gqxzbzowuz6x61o2wirrjjsjvzj1t6e2ie4oh8pj1hekpz53ynzl0hahjy4vawzv7rlkbdg9zqa4a38dbqo3th2qhxtfie2ok4tpv38r3idtn4x0luyy7b3w5d9okwka35ezoae780dtnb0aslkupvb1rn0qjujht2qtaa05bsv160mr7m05xv3kjk2dmwcwqu643gmlp2ksp3gnupuzzayq2llj46pzaraum8utirbhocas1xubqcrausemjbpjwfy5twm27ydulugw64xmrgfc38eyrxaufyyf3e4chi5jnztxejfsmsbjl4uxrw3zgkdnvugjvewpopy36d9hixz7oy79nxfxyw6ev30qx2fpecde3p2pekb6srn08x8st8md4pfc62etd8npjse72ek71qikitlmumf3l1ksdf1vwgyodvm5q1iqn3s1busfugptzn8mh8k453h3mo2xpja98jfm71q3jj2rirg1emif50mate5x10tyxjdqomy1kpujggzflmqe0s9pif9ycfdfvoqk4qnhu6uahjcn23dh4fiu1agnv4b4rhapkrahe9hfklmlclm6jdszyeyva7veeklhpi2wz2ogc3i4906x90gf8qinxrpayvrdk4uohppck71svg2ajpi9vatcpsz88if3tcqj0ppfzozopm7ffbcq429w2n4z9pawyeqisa6o8ibhpg5fs2fbe9yqoygse1dzx94c7za51a4zq3r97omy6fmni9awoz8kzq3al42tihulhf305a1d7r5w7eiw9qrjdx92qmhm8gwxqqyqfx17wgsbgbn5x3wnvima5yqgui7m1yoi5wshmspvn35j5tryebqdrnummgbjcoryevznx7j1c65o82t6vhb56g1sb8utg12jnkitmg089rjhfqswgdpp5usyctm5ynnuzi919cjtxoeyu0w2rymilgrp1pregfmcj9c2hg0vrko9n3153sze41an65ycyqqten4tpy3rk10k6r42rkj0iww83f19wlo76nwgin1eq2moca5eid2ybhr4hexg48v304194r5dzi1yxsi3o485aboiwxolhegx96f1pxle6avwm4csjp89tsww38jl2sehb0j94uycasykp0fad0ifwfcsf3qo5virftqtlgspzqqjux0h98pfq8k9dbx4duboy619sf8pqwq5jk185dwu7ix49pvmbjdrk9oc1wxfxb7te588dfbq8wiuma9chub1de99hxqza9hc805njn7wihrv64t5o8w6xmt0p45w9xhidbp5cygb732i6asqgajvie6j7yzul2b8bs23q2wb37os28qn71x6nk58znk6m6j2g7jax6miusfz4an63lmxl13lbingijxt1f3ohyuyo0564wldge0g2zmkh8texljiv69yep59u2u4h3qwdvbjy04elr9wr4n1kesr165sv37s5tkcjxlit5l8u1hdgqw6yvt5srb0y8l7qmmf3kp8jfvz3ao147p10op0ie9xvb7fx36cgnko38l09b625zlq0dhuxq8e20k9lm2lhzqw4qb1cs1bqkvqit5nckmcteflxs7ujalbdjzwo1bs3gbph0x646oqwix1ffedih2k3kf13kda7j4f2iv161fz56m6nne2pwoz0w50tn95dtq32atsnqawyg1wvz6w8fhr1id4sbjpt0tsq114z3a1pp99zyf9udc7r5xdqlqerqgvdgwlqhgnhjogglbpo4q4cf4g8p030rgv008hfiuuvurz59wbrjb4k1kez484cu6qrakm273e6g3ysc77vfuq6w7yyhw1bj8qkz2jzwilixwl198xdd7m4rdp578qioijruxko0a05stya0aiunr5ajaqdqve52o6r5s88fv00nqh1mcmwibwvgtfc10bjvyirgr80t3hu43cwqtczrb5oqbt2aizyxd7a8q7tpm1te2taczi6cadp2l6dv1ck1mh92sved9ejtuw35nvr873ub6odnpu7ydtd8wwver2w1p4wamjawfx45fx8dsppa08mo72k4j7umnkgfbsfl3lv9q5wo6z9xyp6zjfy4vauvmm5h8zn0q90gpafs7ugl6a4crn62o5tvu8o1yq9tgml5jucyji3c65k3crhrfwmvc1nxc6rkhww0q1tjyfq3aiv6y4rdzmrlgjtu1ptrhu9icniitchca1jh6xgy1fx36dohmr1cxo7tapitobyzuz09cym4zj28e5wiwwp1cekuhjppwwv2iygxqj0oazh474lkfjxs2qa8ouudkzbm8zh6m40tpsdcyc59nvn753n9ec65xz5pcccysf78u3gsh1lz4sbioo0uw53u62un5rdc5ffo0u91lz29qn64325ychcdbbjuomq3rj2q4f6qrc814bfguogxj7kr4eah2o8u4979bnygcmucgfiq3gka2cwb55uocfrc24d79mvbz2moii6e5un28e648i98rsro8dkqsorafgyk192yjkyl7ihfsnuldxcogvpp3th9bbq6jk74k5j83wnujn2saul4oy38j7ivj2j0okx0kmtx9jcof0p2xooubl7qwxupn2n5n20nb6b6iaqe29iyy6ah3tq7smx27898otor1qvjo3jarlrgxit66supyrnnp5gxgenmxmkvogz9kca2lz8f2x4k347nb6yvt3mua7s4ip82qr7u4ptx5nund0onasuf1cju6qwpvsv9dfoa9dmckjfy12afjqclel1zueyzwziolv72ev8tc3b05c26ng7yv94sqljaeou4dt5ohj0gtyfiakif88q4u73uca8uks57tmdevqvzelj6klojrw17w19mbs6lv2swpmades455tv04jk5l9d3wlwsjj4tvyjiwhnq13kdbs65p5jizyggn9oldtfv5t9noy99qkf2yooobo3ufdu7agscl5vfwzbzpyuhkv313db0hqee8n02ehno243x9xn17a8sqm9rbxjieuyeamfxckryg3jgrxqp819sucbw5jnpwi4y37puafnd4hh9rk85ea3jm6dkneljkg5rz6x38as5g6d002i8f9xegojt6tcyptnyi5ii7chrf6mxubyh
6gxg2rm2qu4ytvn6mubebpe1ecsud9chh8c560r7qqehkmu2kzqizqfs4gha84e4tdm0mp840yzbt0n984w9r7gl5nj6rxuyhcjp41z03uei1hy93agsozbkj93wsbhfdorjsiitqducru7vfy8f2zjv9uaapv6134kqoteohdxhl39ejtskqf3qq77ct3aq8ukt1gzqbzqik1dzodby1mipgn4usf37xkqgu6a32784btnsyxa1lk7g2v7itcqwy1ki7tegukxpqeqyvkq4iq5s8jzycm54xq5f0nhum4nahdyt0qfbdi8vlp82spkixb6lp2u436k625qth9tv5cu03klt74hsdblgz92qojd9vz4yabc9ihm7ycieuo83w0e9recj5c9xxnwfn400k4b5cqzsrh8lyode3vtrwk074bmgaub027qwgnhh7gocqcqhgpgbu7mcx3bfuoh6myf41jc8umo3jkdc8gx4x7lyo17fuoh7oetf9xsv0mzxkk6necb31nqxc0tucn7lgt5p2l1oycxr7b 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:13.683 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:13.942 [2024-12-16 01:29:44.354027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:13.942 [2024-12-16 01:29:44.354125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74862 ] 00:08:13.942 { 00:08:13.942 "subsystems": [ 00:08:13.942 { 00:08:13.942 "subsystem": "bdev", 00:08:13.942 "config": [ 00:08:13.942 { 00:08:13.942 "params": { 00:08:13.942 "trtype": "pcie", 00:08:13.942 "traddr": "0000:00:10.0", 00:08:13.942 "name": "Nvme0" 00:08:13.942 }, 00:08:13.942 "method": "bdev_nvme_attach_controller" 00:08:13.942 }, 00:08:13.942 { 00:08:13.942 "method": "bdev_wait_for_examine" 00:08:13.942 } 00:08:13.942 ] 00:08:13.942 } 00:08:13.942 ] 00:08:13.942 } 00:08:13.942 [2024-12-16 01:29:44.499204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.942 [2024-12-16 01:29:44.519244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.942 [2024-12-16 01:29:44.547615] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.201  [2024-12-16T01:29:44.859Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:14.201 00:08:14.201 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:14.201 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:14.201 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:14.201 01:29:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:14.201 [2024-12-16 01:29:44.811138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:14.201 [2024-12-16 01:29:44.811234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74876 ] 00:08:14.201 { 00:08:14.201 "subsystems": [ 00:08:14.201 { 00:08:14.201 "subsystem": "bdev", 00:08:14.201 "config": [ 00:08:14.201 { 00:08:14.201 "params": { 00:08:14.201 "trtype": "pcie", 00:08:14.201 "traddr": "0000:00:10.0", 00:08:14.201 "name": "Nvme0" 00:08:14.201 }, 00:08:14.201 "method": "bdev_nvme_attach_controller" 00:08:14.201 }, 00:08:14.201 { 00:08:14.201 "method": "bdev_wait_for_examine" 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 } 00:08:14.460 [2024-12-16 01:29:44.956076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.460 [2024-12-16 01:29:44.975022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.460 [2024-12-16 01:29:45.002902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.460  [2024-12-16T01:29:45.377Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:14.719 00:08:14.719 01:29:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ pj6d9p7xaufth58sd63qwp9h880ehnbgqw5vpvozv7q8nskbxo2t3nyu49k6ql7ohx6p1zcpu57856n3q0huro3iuo983ayyyj4pd8epxdiefid5g26p9udlr2prwep9zb3ezs6u3gevugwzeg77pcotre5lc11ckllfk7pply9b3c5dqs90435fx5kiz7qo9y62u7tucrwc209ednvz59eoiw2vd4kk2bo3acw0dv22gww6ysn27w0kcpd0zr8ktqczsvdjzo8msg74mby0zqtnmwk7axxfxgb3tor2t3i16k0aqdwc1bmlsgf9wmb2ja8saph4noy7hsjwzq4taqhsyxqs6jkiqi1u09vfmgzmicsx5yqvre0qfqiuqh3lypzlucmz4dw6gqxzbzowuz6x61o2wirrjjsjvzj1t6e2ie4oh8pj1hekpz53ynzl0hahjy4vawzv7rlkbdg9zqa4a38dbqo3th2qhxtfie2ok4tpv38r3idtn4x0luyy7b3w5d9okwka35ezoae780dtnb0aslkupvb1rn0qjujht2qtaa05bsv160mr7m05xv3kjk2dmwcwqu643gmlp2ksp3gnupuzzayq2llj46pzaraum8utirbhocas1xubqcrausemjbpjwfy5twm27ydulugw64xmrgfc38eyrxaufyyf3e4chi5jnztxejfsmsbjl4uxrw3zgkdnvugjvewpopy36d9hixz7oy79nxfxyw6ev30qx2fpecde3p2pekb6srn08x8st8md4pfc62etd8npjse72ek71qikitlmumf3l1ksdf1vwgyodvm5q1iqn3s1busfugptzn8mh8k453h3mo2xpja98jfm71q3jj2rirg1emif50mate5x10tyxjdqomy1kpujggzflmqe0s9pif9ycfdfvoqk4qnhu6uahjcn23dh4fiu1agnv4b4rhapkrahe9hfklmlclm6jdszyeyva7veeklhpi2wz2ogc3i4906x90gf8qinxrpayvrdk4uohppck71svg2ajpi9vatcpsz88if3tcqj0ppfzozopm7ffbcq429w2n4z9pawyeqisa6o8ibhpg5fs2fbe9yqoygse1dzx94c7za51a4zq3r97omy6fmni9awoz8kzq3al42tihulhf305a1d7r5w7eiw9qrjdx92qmhm8gwxqqyqfx17wgsbgbn5x3wnvima5yqgui7m1yoi5wshmspvn35j5tryebqdrnummgbjcoryevznx7j1c65o82t6vhb56g1sb8utg12jnkitmg089rjhfqswgdpp5usyctm5ynnuzi919cjtxoeyu0w2rymilgrp1pregfmcj9c2hg0vrko9n3153sze41an65ycyqqten4tpy3rk10k6r42rkj0iww83f19wlo76nwgin1eq2moca5eid2ybhr4hexg48v304194r5dzi1yxsi3o485aboiwxolhegx96f1pxle6avwm4csjp89tsww38jl2sehb0j94uycasykp0fad0ifwfcsf3qo5virftqtlgspzqqjux0h98pfq8k9dbx4duboy619sf8pqwq5jk185dwu7ix49pvmbjdrk9oc1wxfxb7te588dfbq8wiuma9chub1de99hxqza9hc805njn7wihrv64t5o8w6xmt0p45w9xhidbp5cygb732i6asqgajvie6j7yzul2b8bs23q2wb37os28qn71x6nk58znk6m6j2g7jax6miusfz4an63lmxl13lbingijxt1f3ohyuyo0564wldge0g2zmkh8texljiv69yep59u2u4h3qwdvbjy04elr9wr4n1kesr165sv37s5tkcjxlit5l8u1hdgqw6yvt5srb0y8l7qmmf3kp8jfvz3ao147p10op0ie9xvb7fx36cgnko38l09b625zlq0dhuxq8e20k9lm2lhzqw4qb1cs1bqkvqit5nckmcteflxs7ujalbdjzwo1bs3gbph0x646oqwix1ffedih2k3kf13kda7j4f2iv161fz56m6nne2pwoz0w50tn95dtq32atsnqawyg1wvz6w8fhr1id4sbjpt0tsq114z3a1pp99z
yf9udc7r5xdqlqerqgvdgwlqhgnhjogglbpo4q4cf4g8p030rgv008hfiuuvurz59wbrjb4k1kez484cu6qrakm273e6g3ysc77vfuq6w7yyhw1bj8qkz2jzwilixwl198xdd7m4rdp578qioijruxko0a05stya0aiunr5ajaqdqve52o6r5s88fv00nqh1mcmwibwvgtfc10bjvyirgr80t3hu43cwqtczrb5oqbt2aizyxd7a8q7tpm1te2taczi6cadp2l6dv1ck1mh92sved9ejtuw35nvr873ub6odnpu7ydtd8wwver2w1p4wamjawfx45fx8dsppa08mo72k4j7umnkgfbsfl3lv9q5wo6z9xyp6zjfy4vauvmm5h8zn0q90gpafs7ugl6a4crn62o5tvu8o1yq9tgml5jucyji3c65k3crhrfwmvc1nxc6rkhww0q1tjyfq3aiv6y4rdzmrlgjtu1ptrhu9icniitchca1jh6xgy1fx36dohmr1cxo7tapitobyzuz09cym4zj28e5wiwwp1cekuhjppwwv2iygxqj0oazh474lkfjxs2qa8ouudkzbm8zh6m40tpsdcyc59nvn753n9ec65xz5pcccysf78u3gsh1lz4sbioo0uw53u62un5rdc5ffo0u91lz29qn64325ychcdbbjuomq3rj2q4f6qrc814bfguogxj7kr4eah2o8u4979bnygcmucgfiq3gka2cwb55uocfrc24d79mvbz2moii6e5un28e648i98rsro8dkqsorafgyk192yjkyl7ihfsnuldxcogvpp3th9bbq6jk74k5j83wnujn2saul4oy38j7ivj2j0okx0kmtx9jcof0p2xooubl7qwxupn2n5n20nb6b6iaqe29iyy6ah3tq7smx27898otor1qvjo3jarlrgxit66supyrnnp5gxgenmxmkvogz9kca2lz8f2x4k347nb6yvt3mua7s4ip82qr7u4ptx5nund0onasuf1cju6qwpvsv9dfoa9dmckjfy12afjqclel1zueyzwziolv72ev8tc3b05c26ng7yv94sqljaeou4dt5ohj0gtyfiakif88q4u73uca8uks57tmdevqvzelj6klojrw17w19mbs6lv2swpmades455tv04jk5l9d3wlwsjj4tvyjiwhnq13kdbs65p5jizyggn9oldtfv5t9noy99qkf2yooobo3ufdu7agscl5vfwzbzpyuhkv313db0hqee8n02ehno243x9xn17a8sqm9rbxjieuyeamfxckryg3jgrxqp819sucbw5jnpwi4y37puafnd4hh9rk85ea3jm6dkneljkg5rz6x38as5g6d002i8f9xegojt6tcyptnyi5ii7chrf6mxubyh6gxg2rm2qu4ytvn6mubebpe1ecsud9chh8c560r7qqehkmu2kzqizqfs4gha84e4tdm0mp840yzbt0n984w9r7gl5nj6rxuyhcjp41z03uei1hy93agsozbkj93wsbhfdorjsiitqducru7vfy8f2zjv9uaapv6134kqoteohdxhl39ejtskqf3qq77ct3aq8ukt1gzqbzqik1dzodby1mipgn4usf37xkqgu6a32784btnsyxa1lk7g2v7itcqwy1ki7tegukxpqeqyvkq4iq5s8jzycm54xq5f0nhum4nahdyt0qfbdi8vlp82spkixb6lp2u436k625qth9tv5cu03klt74hsdblgz92qojd9vz4yabc9ihm7ycieuo83w0e9recj5c9xxnwfn400k4b5cqzsrh8lyode3vtrwk074bmgaub027qwgnhh7gocqcqhgpgbu7mcx3bfuoh6myf41jc8umo3jkdc8gx4x7lyo17fuoh7oetf9xsv0mzxkk6necb31nqxc0tucn7lgt5p2l1oycxr7b == 
\p\j\6\d\9\p\7\x\a\u\f\t\h\5\8\s\d\6\3\q\w\p\9\h\8\8\0\e\h\n\b\g\q\w\5\v\p\v\o\z\v\7\q\8\n\s\k\b\x\o\2\t\3\n\y\u\4\9\k\6\q\l\7\o\h\x\6\p\1\z\c\p\u\5\7\8\5\6\n\3\q\0\h\u\r\o\3\i\u\o\9\8\3\a\y\y\y\j\4\p\d\8\e\p\x\d\i\e\f\i\d\5\g\2\6\p\9\u\d\l\r\2\p\r\w\e\p\9\z\b\3\e\z\s\6\u\3\g\e\v\u\g\w\z\e\g\7\7\p\c\o\t\r\e\5\l\c\1\1\c\k\l\l\f\k\7\p\p\l\y\9\b\3\c\5\d\q\s\9\0\4\3\5\f\x\5\k\i\z\7\q\o\9\y\6\2\u\7\t\u\c\r\w\c\2\0\9\e\d\n\v\z\5\9\e\o\i\w\2\v\d\4\k\k\2\b\o\3\a\c\w\0\d\v\2\2\g\w\w\6\y\s\n\2\7\w\0\k\c\p\d\0\z\r\8\k\t\q\c\z\s\v\d\j\z\o\8\m\s\g\7\4\m\b\y\0\z\q\t\n\m\w\k\7\a\x\x\f\x\g\b\3\t\o\r\2\t\3\i\1\6\k\0\a\q\d\w\c\1\b\m\l\s\g\f\9\w\m\b\2\j\a\8\s\a\p\h\4\n\o\y\7\h\s\j\w\z\q\4\t\a\q\h\s\y\x\q\s\6\j\k\i\q\i\1\u\0\9\v\f\m\g\z\m\i\c\s\x\5\y\q\v\r\e\0\q\f\q\i\u\q\h\3\l\y\p\z\l\u\c\m\z\4\d\w\6\g\q\x\z\b\z\o\w\u\z\6\x\6\1\o\2\w\i\r\r\j\j\s\j\v\z\j\1\t\6\e\2\i\e\4\o\h\8\p\j\1\h\e\k\p\z\5\3\y\n\z\l\0\h\a\h\j\y\4\v\a\w\z\v\7\r\l\k\b\d\g\9\z\q\a\4\a\3\8\d\b\q\o\3\t\h\2\q\h\x\t\f\i\e\2\o\k\4\t\p\v\3\8\r\3\i\d\t\n\4\x\0\l\u\y\y\7\b\3\w\5\d\9\o\k\w\k\a\3\5\e\z\o\a\e\7\8\0\d\t\n\b\0\a\s\l\k\u\p\v\b\1\r\n\0\q\j\u\j\h\t\2\q\t\a\a\0\5\b\s\v\1\6\0\m\r\7\m\0\5\x\v\3\k\j\k\2\d\m\w\c\w\q\u\6\4\3\g\m\l\p\2\k\s\p\3\g\n\u\p\u\z\z\a\y\q\2\l\l\j\4\6\p\z\a\r\a\u\m\8\u\t\i\r\b\h\o\c\a\s\1\x\u\b\q\c\r\a\u\s\e\m\j\b\p\j\w\f\y\5\t\w\m\2\7\y\d\u\l\u\g\w\6\4\x\m\r\g\f\c\3\8\e\y\r\x\a\u\f\y\y\f\3\e\4\c\h\i\5\j\n\z\t\x\e\j\f\s\m\s\b\j\l\4\u\x\r\w\3\z\g\k\d\n\v\u\g\j\v\e\w\p\o\p\y\3\6\d\9\h\i\x\z\7\o\y\7\9\n\x\f\x\y\w\6\e\v\3\0\q\x\2\f\p\e\c\d\e\3\p\2\p\e\k\b\6\s\r\n\0\8\x\8\s\t\8\m\d\4\p\f\c\6\2\e\t\d\8\n\p\j\s\e\7\2\e\k\7\1\q\i\k\i\t\l\m\u\m\f\3\l\1\k\s\d\f\1\v\w\g\y\o\d\v\m\5\q\1\i\q\n\3\s\1\b\u\s\f\u\g\p\t\z\n\8\m\h\8\k\4\5\3\h\3\m\o\2\x\p\j\a\9\8\j\f\m\7\1\q\3\j\j\2\r\i\r\g\1\e\m\i\f\5\0\m\a\t\e\5\x\1\0\t\y\x\j\d\q\o\m\y\1\k\p\u\j\g\g\z\f\l\m\q\e\0\s\9\p\i\f\9\y\c\f\d\f\v\o\q\k\4\q\n\h\u\6\u\a\h\j\c\n\2\3\d\h\4\f\i\u\1\a\g\n\v\4\b\4\r\h\a\p\k\r\a\h\e\9\h\f\k\l\m\l\c\l\m\6\j\d\s\z\y\e\y\v\a\7\v\e\e\k\l\h\p\i\2\w\z\2\o\g\c\3\i\4\9\0\6\x\9\0\g\f\8\q\i\n\x\r\p\a\y\v\r\d\k\4\u\o\h\p\p\c\k\7\1\s\v\g\2\a\j\p\i\9\v\a\t\c\p\s\z\8\8\i\f\3\t\c\q\j\0\p\p\f\z\o\z\o\p\m\7\f\f\b\c\q\4\2\9\w\2\n\4\z\9\p\a\w\y\e\q\i\s\a\6\o\8\i\b\h\p\g\5\f\s\2\f\b\e\9\y\q\o\y\g\s\e\1\d\z\x\9\4\c\7\z\a\5\1\a\4\z\q\3\r\9\7\o\m\y\6\f\m\n\i\9\a\w\o\z\8\k\z\q\3\a\l\4\2\t\i\h\u\l\h\f\3\0\5\a\1\d\7\r\5\w\7\e\i\w\9\q\r\j\d\x\9\2\q\m\h\m\8\g\w\x\q\q\y\q\f\x\1\7\w\g\s\b\g\b\n\5\x\3\w\n\v\i\m\a\5\y\q\g\u\i\7\m\1\y\o\i\5\w\s\h\m\s\p\v\n\3\5\j\5\t\r\y\e\b\q\d\r\n\u\m\m\g\b\j\c\o\r\y\e\v\z\n\x\7\j\1\c\6\5\o\8\2\t\6\v\h\b\5\6\g\1\s\b\8\u\t\g\1\2\j\n\k\i\t\m\g\0\8\9\r\j\h\f\q\s\w\g\d\p\p\5\u\s\y\c\t\m\5\y\n\n\u\z\i\9\1\9\c\j\t\x\o\e\y\u\0\w\2\r\y\m\i\l\g\r\p\1\p\r\e\g\f\m\c\j\9\c\2\h\g\0\v\r\k\o\9\n\3\1\5\3\s\z\e\4\1\a\n\6\5\y\c\y\q\q\t\e\n\4\t\p\y\3\r\k\1\0\k\6\r\4\2\r\k\j\0\i\w\w\8\3\f\1\9\w\l\o\7\6\n\w\g\i\n\1\e\q\2\m\o\c\a\5\e\i\d\2\y\b\h\r\4\h\e\x\g\4\8\v\3\0\4\1\9\4\r\5\d\z\i\1\y\x\s\i\3\o\4\8\5\a\b\o\i\w\x\o\l\h\e\g\x\9\6\f\1\p\x\l\e\6\a\v\w\m\4\c\s\j\p\8\9\t\s\w\w\3\8\j\l\2\s\e\h\b\0\j\9\4\u\y\c\a\s\y\k\p\0\f\a\d\0\i\f\w\f\c\s\f\3\q\o\5\v\i\r\f\t\q\t\l\g\s\p\z\q\q\j\u\x\0\h\9\8\p\f\q\8\k\9\d\b\x\4\d\u\b\o\y\6\1\9\s\f\8\p\q\w\q\5\j\k\1\8\5\d\w\u\7\i\x\4\9\p\v\m\b\j\d\r\k\9\o\c\1\w\x\f\x\b\7\t\e\5\8\8\d\f\b\q\8\w\i\u\m\a\9\c\h\u\b\1\d\e\9\9\h\x\q\z\a\9\h\c\8\0\5\n\j\n\7\w\i\h\r\v\6\4\t\5\o\8\w\6\x\m\t\0\p\4\5\w\9\x\h\i\d\b\p\5\c\y\g\b\7\3\2\i\6\a\s\q\g\a\j\v\i\e\6\j\7\y\z\u\l\2\b\8\b\s\2\3\q\2\w\b\3\7\o\s\2\8\q\n\7\1\x\6\n\k\5\8\z\n\k\6\m\6\j\2\g\7\j\a\x\6\m\i\u\s\f\z\4\
a\n\6\3\l\m\x\l\1\3\l\b\i\n\g\i\j\x\t\1\f\3\o\h\y\u\y\o\0\5\6\4\w\l\d\g\e\0\g\2\z\m\k\h\8\t\e\x\l\j\i\v\6\9\y\e\p\5\9\u\2\u\4\h\3\q\w\d\v\b\j\y\0\4\e\l\r\9\w\r\4\n\1\k\e\s\r\1\6\5\s\v\3\7\s\5\t\k\c\j\x\l\i\t\5\l\8\u\1\h\d\g\q\w\6\y\v\t\5\s\r\b\0\y\8\l\7\q\m\m\f\3\k\p\8\j\f\v\z\3\a\o\1\4\7\p\1\0\o\p\0\i\e\9\x\v\b\7\f\x\3\6\c\g\n\k\o\3\8\l\0\9\b\6\2\5\z\l\q\0\d\h\u\x\q\8\e\2\0\k\9\l\m\2\l\h\z\q\w\4\q\b\1\c\s\1\b\q\k\v\q\i\t\5\n\c\k\m\c\t\e\f\l\x\s\7\u\j\a\l\b\d\j\z\w\o\1\b\s\3\g\b\p\h\0\x\6\4\6\o\q\w\i\x\1\f\f\e\d\i\h\2\k\3\k\f\1\3\k\d\a\7\j\4\f\2\i\v\1\6\1\f\z\5\6\m\6\n\n\e\2\p\w\o\z\0\w\5\0\t\n\9\5\d\t\q\3\2\a\t\s\n\q\a\w\y\g\1\w\v\z\6\w\8\f\h\r\1\i\d\4\s\b\j\p\t\0\t\s\q\1\1\4\z\3\a\1\p\p\9\9\z\y\f\9\u\d\c\7\r\5\x\d\q\l\q\e\r\q\g\v\d\g\w\l\q\h\g\n\h\j\o\g\g\l\b\p\o\4\q\4\c\f\4\g\8\p\0\3\0\r\g\v\0\0\8\h\f\i\u\u\v\u\r\z\5\9\w\b\r\j\b\4\k\1\k\e\z\4\8\4\c\u\6\q\r\a\k\m\2\7\3\e\6\g\3\y\s\c\7\7\v\f\u\q\6\w\7\y\y\h\w\1\b\j\8\q\k\z\2\j\z\w\i\l\i\x\w\l\1\9\8\x\d\d\7\m\4\r\d\p\5\7\8\q\i\o\i\j\r\u\x\k\o\0\a\0\5\s\t\y\a\0\a\i\u\n\r\5\a\j\a\q\d\q\v\e\5\2\o\6\r\5\s\8\8\f\v\0\0\n\q\h\1\m\c\m\w\i\b\w\v\g\t\f\c\1\0\b\j\v\y\i\r\g\r\8\0\t\3\h\u\4\3\c\w\q\t\c\z\r\b\5\o\q\b\t\2\a\i\z\y\x\d\7\a\8\q\7\t\p\m\1\t\e\2\t\a\c\z\i\6\c\a\d\p\2\l\6\d\v\1\c\k\1\m\h\9\2\s\v\e\d\9\e\j\t\u\w\3\5\n\v\r\8\7\3\u\b\6\o\d\n\p\u\7\y\d\t\d\8\w\w\v\e\r\2\w\1\p\4\w\a\m\j\a\w\f\x\4\5\f\x\8\d\s\p\p\a\0\8\m\o\7\2\k\4\j\7\u\m\n\k\g\f\b\s\f\l\3\l\v\9\q\5\w\o\6\z\9\x\y\p\6\z\j\f\y\4\v\a\u\v\m\m\5\h\8\z\n\0\q\9\0\g\p\a\f\s\7\u\g\l\6\a\4\c\r\n\6\2\o\5\t\v\u\8\o\1\y\q\9\t\g\m\l\5\j\u\c\y\j\i\3\c\6\5\k\3\c\r\h\r\f\w\m\v\c\1\n\x\c\6\r\k\h\w\w\0\q\1\t\j\y\f\q\3\a\i\v\6\y\4\r\d\z\m\r\l\g\j\t\u\1\p\t\r\h\u\9\i\c\n\i\i\t\c\h\c\a\1\j\h\6\x\g\y\1\f\x\3\6\d\o\h\m\r\1\c\x\o\7\t\a\p\i\t\o\b\y\z\u\z\0\9\c\y\m\4\z\j\2\8\e\5\w\i\w\w\p\1\c\e\k\u\h\j\p\p\w\w\v\2\i\y\g\x\q\j\0\o\a\z\h\4\7\4\l\k\f\j\x\s\2\q\a\8\o\u\u\d\k\z\b\m\8\z\h\6\m\4\0\t\p\s\d\c\y\c\5\9\n\v\n\7\5\3\n\9\e\c\6\5\x\z\5\p\c\c\c\y\s\f\7\8\u\3\g\s\h\1\l\z\4\s\b\i\o\o\0\u\w\5\3\u\6\2\u\n\5\r\d\c\5\f\f\o\0\u\9\1\l\z\2\9\q\n\6\4\3\2\5\y\c\h\c\d\b\b\j\u\o\m\q\3\r\j\2\q\4\f\6\q\r\c\8\1\4\b\f\g\u\o\g\x\j\7\k\r\4\e\a\h\2\o\8\u\4\9\7\9\b\n\y\g\c\m\u\c\g\f\i\q\3\g\k\a\2\c\w\b\5\5\u\o\c\f\r\c\2\4\d\7\9\m\v\b\z\2\m\o\i\i\6\e\5\u\n\2\8\e\6\4\8\i\9\8\r\s\r\o\8\d\k\q\s\o\r\a\f\g\y\k\1\9\2\y\j\k\y\l\7\i\h\f\s\n\u\l\d\x\c\o\g\v\p\p\3\t\h\9\b\b\q\6\j\k\7\4\k\5\j\8\3\w\n\u\j\n\2\s\a\u\l\4\o\y\3\8\j\7\i\v\j\2\j\0\o\k\x\0\k\m\t\x\9\j\c\o\f\0\p\2\x\o\o\u\b\l\7\q\w\x\u\p\n\2\n\5\n\2\0\n\b\6\b\6\i\a\q\e\2\9\i\y\y\6\a\h\3\t\q\7\s\m\x\2\7\8\9\8\o\t\o\r\1\q\v\j\o\3\j\a\r\l\r\g\x\i\t\6\6\s\u\p\y\r\n\n\p\5\g\x\g\e\n\m\x\m\k\v\o\g\z\9\k\c\a\2\l\z\8\f\2\x\4\k\3\4\7\n\b\6\y\v\t\3\m\u\a\7\s\4\i\p\8\2\q\r\7\u\4\p\t\x\5\n\u\n\d\0\o\n\a\s\u\f\1\c\j\u\6\q\w\p\v\s\v\9\d\f\o\a\9\d\m\c\k\j\f\y\1\2\a\f\j\q\c\l\e\l\1\z\u\e\y\z\w\z\i\o\l\v\7\2\e\v\8\t\c\3\b\0\5\c\2\6\n\g\7\y\v\9\4\s\q\l\j\a\e\o\u\4\d\t\5\o\h\j\0\g\t\y\f\i\a\k\i\f\8\8\q\4\u\7\3\u\c\a\8\u\k\s\5\7\t\m\d\e\v\q\v\z\e\l\j\6\k\l\o\j\r\w\1\7\w\1\9\m\b\s\6\l\v\2\s\w\p\m\a\d\e\s\4\5\5\t\v\0\4\j\k\5\l\9\d\3\w\l\w\s\j\j\4\t\v\y\j\i\w\h\n\q\1\3\k\d\b\s\6\5\p\5\j\i\z\y\g\g\n\9\o\l\d\t\f\v\5\t\9\n\o\y\9\9\q\k\f\2\y\o\o\o\b\o\3\u\f\d\u\7\a\g\s\c\l\5\v\f\w\z\b\z\p\y\u\h\k\v\3\1\3\d\b\0\h\q\e\e\8\n\0\2\e\h\n\o\2\4\3\x\9\x\n\1\7\a\8\s\q\m\9\r\b\x\j\i\e\u\y\e\a\m\f\x\c\k\r\y\g\3\j\g\r\x\q\p\8\1\9\s\u\c\b\w\5\j\n\p\w\i\4\y\3\7\p\u\a\f\n\d\4\h\h\9\r\k\8\5\e\a\3\j\m\6\d\k\n\e\l\j\k\g\5\r\z\6\x\3\8\a\s\5\g\6\d\0\0\2\i\8\f\9\x\e\g\o\j\t\6\t\c\y\p\t\n\y\i\5\i\i\7\c\h\r\f\6\m\x\u\b\y\h\6\g\x\g\2
\r\m\2\q\u\4\y\t\v\n\6\m\u\b\e\b\p\e\1\e\c\s\u\d\9\c\h\h\8\c\5\6\0\r\7\q\q\e\h\k\m\u\2\k\z\q\i\z\q\f\s\4\g\h\a\8\4\e\4\t\d\m\0\m\p\8\4\0\y\z\b\t\0\n\9\8\4\w\9\r\7\g\l\5\n\j\6\r\x\u\y\h\c\j\p\4\1\z\0\3\u\e\i\1\h\y\9\3\a\g\s\o\z\b\k\j\9\3\w\s\b\h\f\d\o\r\j\s\i\i\t\q\d\u\c\r\u\7\v\f\y\8\f\2\z\j\v\9\u\a\a\p\v\6\1\3\4\k\q\o\t\e\o\h\d\x\h\l\3\9\e\j\t\s\k\q\f\3\q\q\7\7\c\t\3\a\q\8\u\k\t\1\g\z\q\b\z\q\i\k\1\d\z\o\d\b\y\1\m\i\p\g\n\4\u\s\f\3\7\x\k\q\g\u\6\a\3\2\7\8\4\b\t\n\s\y\x\a\1\l\k\7\g\2\v\7\i\t\c\q\w\y\1\k\i\7\t\e\g\u\k\x\p\q\e\q\y\v\k\q\4\i\q\5\s\8\j\z\y\c\m\5\4\x\q\5\f\0\n\h\u\m\4\n\a\h\d\y\t\0\q\f\b\d\i\8\v\l\p\8\2\s\p\k\i\x\b\6\l\p\2\u\4\3\6\k\6\2\5\q\t\h\9\t\v\5\c\u\0\3\k\l\t\7\4\h\s\d\b\l\g\z\9\2\q\o\j\d\9\v\z\4\y\a\b\c\9\i\h\m\7\y\c\i\e\u\o\8\3\w\0\e\9\r\e\c\j\5\c\9\x\x\n\w\f\n\4\0\0\k\4\b\5\c\q\z\s\r\h\8\l\y\o\d\e\3\v\t\r\w\k\0\7\4\b\m\g\a\u\b\0\2\7\q\w\g\n\h\h\7\g\o\c\q\c\q\h\g\p\g\b\u\7\m\c\x\3\b\f\u\o\h\6\m\y\f\4\1\j\c\8\u\m\o\3\j\k\d\c\8\g\x\4\x\7\l\y\o\1\7\f\u\o\h\7\o\e\t\f\9\x\s\v\0\m\z\x\k\k\6\n\e\c\b\3\1\n\q\x\c\0\t\u\c\n\7\l\g\t\5\p\2\l\1\o\y\c\x\r\7\b ]] 00:08:14.720 00:08:14.720 real 0m0.965s 00:08:14.720 user 0m0.658s 00:08:14.720 sys 0m0.393s 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:14.720 ************************************ 00:08:14.720 END TEST dd_rw_offset 00:08:14.720 ************************************ 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:14.720 01:29:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:14.720 { 00:08:14.720 "subsystems": [ 00:08:14.720 { 00:08:14.720 "subsystem": "bdev", 00:08:14.720 "config": [ 00:08:14.720 { 00:08:14.720 "params": { 00:08:14.720 "trtype": "pcie", 00:08:14.720 "traddr": "0000:00:10.0", 00:08:14.720 "name": "Nvme0" 00:08:14.720 }, 00:08:14.720 "method": "bdev_nvme_attach_controller" 00:08:14.720 }, 00:08:14.720 { 00:08:14.720 "method": "bdev_wait_for_examine" 00:08:14.720 } 00:08:14.720 ] 00:08:14.720 } 00:08:14.720 ] 00:08:14.720 } 00:08:14.720 [2024-12-16 01:29:45.309767] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:14.720 [2024-12-16 01:29:45.309883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74905 ] 00:08:14.979 [2024-12-16 01:29:45.456498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.979 [2024-12-16 01:29:45.474850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.979 [2024-12-16 01:29:45.504122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.979  [2024-12-16T01:29:45.895Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:15.237 00:08:15.237 01:29:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.237 00:08:15.237 real 0m13.783s 00:08:15.237 user 0m9.896s 00:08:15.238 sys 0m4.481s 00:08:15.238 01:29:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.238 01:29:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.238 ************************************ 00:08:15.238 END TEST spdk_dd_basic_rw 00:08:15.238 ************************************ 00:08:15.238 01:29:45 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:15.238 01:29:45 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.238 01:29:45 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.238 01:29:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:15.238 ************************************ 00:08:15.238 START TEST spdk_dd_posix 00:08:15.238 ************************************ 00:08:15.238 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:15.238 * Looking for test storage... 
00:08:15.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:15.238 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.238 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.238 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.497 --rc genhtml_branch_coverage=1 00:08:15.497 --rc genhtml_function_coverage=1 00:08:15.497 --rc genhtml_legend=1 00:08:15.497 --rc geninfo_all_blocks=1 00:08:15.497 --rc geninfo_unexecuted_blocks=1 00:08:15.497 00:08:15.497 ' 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.497 --rc genhtml_branch_coverage=1 00:08:15.497 --rc genhtml_function_coverage=1 00:08:15.497 --rc genhtml_legend=1 00:08:15.497 --rc geninfo_all_blocks=1 00:08:15.497 --rc geninfo_unexecuted_blocks=1 00:08:15.497 00:08:15.497 ' 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.497 --rc genhtml_branch_coverage=1 00:08:15.497 --rc genhtml_function_coverage=1 00:08:15.497 --rc genhtml_legend=1 00:08:15.497 --rc geninfo_all_blocks=1 00:08:15.497 --rc geninfo_unexecuted_blocks=1 00:08:15.497 00:08:15.497 ' 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.497 --rc genhtml_branch_coverage=1 00:08:15.497 --rc genhtml_function_coverage=1 00:08:15.497 --rc genhtml_legend=1 00:08:15.497 --rc geninfo_all_blocks=1 00:08:15.497 --rc geninfo_unexecuted_blocks=1 00:08:15.497 00:08:15.497 ' 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.497 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:15.498 * First test run, liburing in use 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:15.498 ************************************ 00:08:15.498 START TEST dd_flag_append 00:08:15.498 ************************************ 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=r8g1nyxsjv4u7glwu6urw4p8xk2yh349 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=icttfwoiiq9q9haazzoqwzntznx86a1q 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s r8g1nyxsjv4u7glwu6urw4p8xk2yh349 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s icttfwoiiq9q9haazzoqwzntznx86a1q 00:08:15.498 01:29:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:15.498 [2024-12-16 01:29:46.039463] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:15.498 [2024-12-16 01:29:46.039753] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74972 ] 00:08:15.757 [2024-12-16 01:29:46.187880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.757 [2024-12-16 01:29:46.206672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.757 [2024-12-16 01:29:46.237337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.757  [2024-12-16T01:29:46.415Z] Copying: 32/32 [B] (average 31 kBps) 00:08:15.757 00:08:15.757 ************************************ 00:08:15.757 END TEST dd_flag_append 00:08:15.757 ************************************ 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ icttfwoiiq9q9haazzoqwzntznx86a1qr8g1nyxsjv4u7glwu6urw4p8xk2yh349 == \i\c\t\t\f\w\o\i\i\q\9\q\9\h\a\a\z\z\o\q\w\z\n\t\z\n\x\8\6\a\1\q\r\8\g\1\n\y\x\s\j\v\4\u\7\g\l\w\u\6\u\r\w\4\p\8\x\k\2\y\h\3\4\9 ]] 00:08:15.757 00:08:15.757 real 0m0.388s 00:08:15.757 user 0m0.184s 00:08:15.757 sys 0m0.172s 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:15.757 ************************************ 00:08:15.757 START TEST dd_flag_directory 00:08:15.757 ************************************ 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.757 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.016 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.016 [2024-12-16 01:29:46.463295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:16.016 [2024-12-16 01:29:46.463379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75000 ] 00:08:16.016 [2024-12-16 01:29:46.602747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.016 [2024-12-16 01:29:46.621166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.016 [2024-12-16 01:29:46.648471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.016 [2024-12-16 01:29:46.664493] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:16.016 [2024-12-16 01:29:46.664573] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:16.016 [2024-12-16 01:29:46.664603] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.274 [2024-12-16 01:29:46.724093] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.274 01:29:46 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.274 01:29:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:16.275 [2024-12-16 01:29:46.831863] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:16.275 [2024-12-16 01:29:46.831956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75010 ] 00:08:16.534 [2024-12-16 01:29:46.977174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.534 [2024-12-16 01:29:46.995611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.534 [2024-12-16 01:29:47.022964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.534 [2024-12-16 01:29:47.038814] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:16.534 [2024-12-16 01:29:47.038867] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:16.534 [2024-12-16 01:29:47.038896] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:16.534 [2024-12-16 01:29:47.098911] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:16.534 ************************************ 00:08:16.534 END TEST dd_flag_directory 00:08:16.534 ************************************ 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:16.534 00:08:16.534 real 0m0.733s 00:08:16.534 user 0m0.356s 00:08:16.534 sys 0m0.170s 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:16.534 01:29:47 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.534 01:29:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:16.793 ************************************ 00:08:16.793 START TEST dd_flag_nofollow 00:08:16.793 ************************************ 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:16.793 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.793 [2024-12-16 01:29:47.269612] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
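The dd_flag_directory run that finished just above points --iflag=directory and then --oflag=directory at a regular file and requires both invocations to fail with the "Not a directory" errors visible in its trace, since O_DIRECTORY only allows opening an actual directory. A rough stand-alone illustration of that expected failure with GNU dd (file name is a placeholder):

printf %s 'payload' > dd.dump0
# Opening a regular file with O_DIRECTORY fails with ENOTDIR, the error the test expects.
dd if=dd.dump0 of=/dev/null iflag=directory status=none || echo 'failed as expected: Not a directory'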
00:08:16.793 [2024-12-16 01:29:47.269696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75033 ] 00:08:16.793 [2024-12-16 01:29:47.417941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.793 [2024-12-16 01:29:47.436263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.052 [2024-12-16 01:29:47.464264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.052 [2024-12-16 01:29:47.480043] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:17.052 [2024-12-16 01:29:47.480104] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:17.052 [2024-12-16 01:29:47.480116] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.052 [2024-12-16 01:29:47.538835] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.052 01:29:47 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.052 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:17.052 [2024-12-16 01:29:47.649895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:17.052 [2024-12-16 01:29:47.649989] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75048 ] 00:08:17.311 [2024-12-16 01:29:47.797489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.311 [2024-12-16 01:29:47.816274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.311 [2024-12-16 01:29:47.843719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.311 [2024-12-16 01:29:47.859579] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:17.311 [2024-12-16 01:29:47.859642] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:17.311 [2024-12-16 01:29:47.859670] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:17.311 [2024-12-16 01:29:47.923085] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:17.569 01:29:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.569 [2024-12-16 01:29:48.040787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
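dd_flag_nofollow creates dd.dump0.link and dd.dump1.link with ln -fs and expects the copies that open those links with the nofollow flag to fail with "Too many levels of symbolic links" (ELOOP), while the final copy through dd.dump0.link without the flag must succeed; the es=216 / es=88 / es=1 lines are the harness normalising the failing exit status before checking that it is non-zero. A hedged GNU dd analog of the negative check (not the SPDK NOT helper itself):

ln -fs dd.dump0 dd.dump0.link
es=0
dd if=dd.dump0.link of=/dev/null iflag=nofollow status=none || es=$?
# O_NOFOLLOW on a symlink fails with ELOOP, so a non-zero status is the passing outcome here.
(( es != 0 )) && echo 'nofollow rejected the symlink, as required'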
00:08:17.570 [2024-12-16 01:29:48.040881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75050 ] 00:08:17.570 [2024-12-16 01:29:48.185789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.570 [2024-12-16 01:29:48.203911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.828 [2024-12-16 01:29:48.233122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.828  [2024-12-16T01:29:48.486Z] Copying: 512/512 [B] (average 500 kBps) 00:08:17.828 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fz5gshul4h9qs1qpv9n6zi0o5ow6431wg235ialock0jyaqlmj1kv8ex1117wcqk2j75lc1hdnzzhlfgr2owpv6nbxpwl7l3d1o3ltu60940rgjzkud58ifr8h4v4gay1okbc1vf5i7i80dfqz2z3nccq4ewrozk6bi848qawg4o7hvkr76gd6b4e2p2yxizgplt26w4tau3fluea2lrhoiafd5r10apk5zbcjoxzbhckh9t402fsr9msbo9xz6629ypxabdt6isqjhi8fri0cw5f6mr0v0mbbyh9m4yznnnlb4gi6ztida7q65q4gcr2pj0r7zk3fa74p9ct36mb89id5ecc4ar9tlit5h2m1ioh2s1qfgovcnsuon87y9ma46eg89u6y8v2rs3u3a26cm69jokk00v82bj5o8l3esypn4cy2ezdgaomyzt9wqnuwsurt2n3amm71xgwjvdqkl1oibmkhmalmz21p5rmyn5sz12emqdzo34u8gr7xba == \f\z\5\g\s\h\u\l\4\h\9\q\s\1\q\p\v\9\n\6\z\i\0\o\5\o\w\6\4\3\1\w\g\2\3\5\i\a\l\o\c\k\0\j\y\a\q\l\m\j\1\k\v\8\e\x\1\1\1\7\w\c\q\k\2\j\7\5\l\c\1\h\d\n\z\z\h\l\f\g\r\2\o\w\p\v\6\n\b\x\p\w\l\7\l\3\d\1\o\3\l\t\u\6\0\9\4\0\r\g\j\z\k\u\d\5\8\i\f\r\8\h\4\v\4\g\a\y\1\o\k\b\c\1\v\f\5\i\7\i\8\0\d\f\q\z\2\z\3\n\c\c\q\4\e\w\r\o\z\k\6\b\i\8\4\8\q\a\w\g\4\o\7\h\v\k\r\7\6\g\d\6\b\4\e\2\p\2\y\x\i\z\g\p\l\t\2\6\w\4\t\a\u\3\f\l\u\e\a\2\l\r\h\o\i\a\f\d\5\r\1\0\a\p\k\5\z\b\c\j\o\x\z\b\h\c\k\h\9\t\4\0\2\f\s\r\9\m\s\b\o\9\x\z\6\6\2\9\y\p\x\a\b\d\t\6\i\s\q\j\h\i\8\f\r\i\0\c\w\5\f\6\m\r\0\v\0\m\b\b\y\h\9\m\4\y\z\n\n\n\l\b\4\g\i\6\z\t\i\d\a\7\q\6\5\q\4\g\c\r\2\p\j\0\r\7\z\k\3\f\a\7\4\p\9\c\t\3\6\m\b\8\9\i\d\5\e\c\c\4\a\r\9\t\l\i\t\5\h\2\m\1\i\o\h\2\s\1\q\f\g\o\v\c\n\s\u\o\n\8\7\y\9\m\a\4\6\e\g\8\9\u\6\y\8\v\2\r\s\3\u\3\a\2\6\c\m\6\9\j\o\k\k\0\0\v\8\2\b\j\5\o\8\l\3\e\s\y\p\n\4\c\y\2\e\z\d\g\a\o\m\y\z\t\9\w\q\n\u\w\s\u\r\t\2\n\3\a\m\m\7\1\x\g\w\j\v\d\q\k\l\1\o\i\b\m\k\h\m\a\l\m\z\2\1\p\5\r\m\y\n\5\s\z\1\2\e\m\q\d\z\o\3\4\u\8\g\r\7\x\b\a ]] 00:08:17.828 00:08:17.828 real 0m1.161s 00:08:17.828 user 0m0.584s 00:08:17.828 sys 0m0.342s 00:08:17.828 ************************************ 00:08:17.828 END TEST dd_flag_nofollow 00:08:17.828 ************************************ 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 ************************************ 00:08:17.828 START TEST dd_flag_noatime 00:08:17.828 ************************************ 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1734312588 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1734312588 00:08:17.828 01:29:48 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:19.204 01:29:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.204 [2024-12-16 01:29:49.487608] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:19.204 [2024-12-16 01:29:49.487712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75092 ] 00:08:19.204 [2024-12-16 01:29:49.641211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.204 [2024-12-16 01:29:49.665267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.204 [2024-12-16 01:29:49.698404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.204  [2024-12-16T01:29:49.862Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.204 00:08:19.204 01:29:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.204 01:29:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1734312588 )) 00:08:19.204 01:29:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.204 01:29:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1734312588 )) 00:08:19.204 01:29:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.462 [2024-12-16 01:29:49.907156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
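dd_flag_noatime records the access times of both dump files with stat --printf=%X (1734312588 in this run), sleeps one second, copies with --iflag=noatime and checks that neither atime moved; the second copy without the flag, continuing below, must push dump0's atime forward. The same idea with GNU dd, assuming a file you own on a filesystem whose mount options let a plain read update atime (e.g. a freshly written file under relatime):

before=$(stat --printf=%X dd.dump0)
sleep 1
# iflag=noatime (O_NOATIME) needs ownership of the file or CAP_FOWNER.
dd if=dd.dump0 of=/dev/null iflag=noatime status=none
(( $(stat --printf=%X dd.dump0) == before )) && echo 'atime preserved with noatime'
dd if=dd.dump0 of=/dev/null status=none
(( $(stat --printf=%X dd.dump0) > before )) && echo 'atime advanced by a normal read'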
00:08:19.462 [2024-12-16 01:29:49.907258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75106 ] 00:08:19.462 [2024-12-16 01:29:50.053165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.462 [2024-12-16 01:29:50.071920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.462 [2024-12-16 01:29:50.099394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.462  [2024-12-16T01:29:50.379Z] Copying: 512/512 [B] (average 500 kBps) 00:08:19.721 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1734312590 )) 00:08:19.721 00:08:19.721 real 0m1.820s 00:08:19.721 user 0m0.402s 00:08:19.721 sys 0m0.365s 00:08:19.721 ************************************ 00:08:19.721 END TEST dd_flag_noatime 00:08:19.721 ************************************ 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:19.721 ************************************ 00:08:19.721 START TEST dd_flags_misc 00:08:19.721 ************************************ 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.721 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:19.721 [2024-12-16 01:29:50.343701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
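dd_flags_misc, which starts here, is a small matrix test: for each input flag in (direct, nonblock) and each output flag in (direct, nonblock, sync, dsync) it copies the 512-byte dump0 into dump1 and compares the contents. A compact sketch of that loop with GNU dd; the fixed 512-byte block size is there because O_DIRECT usually demands aligned transfer sizes:

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  for flag_rw in "${flags_rw[@]}"; do
    dd if=dd.dump0 of=dd.dump1 bs=512 iflag="$flag_ro" oflag="$flag_rw" status=none
    [[ "$(cat dd.dump1)" == "$(cat dd.dump0)" ]] || echo "mismatch for $flag_ro/$flag_rw"
  done
done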
00:08:19.721 [2024-12-16 01:29:50.343791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75129 ] 00:08:19.981 [2024-12-16 01:29:50.489980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.981 [2024-12-16 01:29:50.509394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.981 [2024-12-16 01:29:50.536696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.981  [2024-12-16T01:29:50.897Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.239 00:08:20.240 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3v4z957birhi94zsqu1nge9hrl43ix18a9vwuwpuiboihvoo6sl6lp1wvir5628jgbbz6wipwq6b4gzma2a732jvh5m1ruv69wgfqylt0z440fk3u00a7wzyrbj1utxcgqmg461yt5penf1rd56m2yvzu48xas7ngmgd4kamcctqvs9glkjqu1jozfrktd41xya0gpnhgs6xobiaowde3u3mk1nalbjqvafjvzqqpznthkchakk4nku8axmzayxzorsajqvj2m786d66dyexg24x3slj69gb3uiv6njbli06gmryvj1w176fhhcaog2fztvurvjlluvwabq0k37ri3wuj03gsa8i0pi4a2qb0k839zrndxx0hkry8oey6tkzcpgs0tn814tj25jy1riy2wkmls25b6849pyujh9vcgpsxxql5726nkbom77yuguly2szmdct8ys3zp301ssdrax6j7li2uk9lpbw6h3cplhd6u0ebiwr3tsbicmq7cwp == \3\v\4\z\9\5\7\b\i\r\h\i\9\4\z\s\q\u\1\n\g\e\9\h\r\l\4\3\i\x\1\8\a\9\v\w\u\w\p\u\i\b\o\i\h\v\o\o\6\s\l\6\l\p\1\w\v\i\r\5\6\2\8\j\g\b\b\z\6\w\i\p\w\q\6\b\4\g\z\m\a\2\a\7\3\2\j\v\h\5\m\1\r\u\v\6\9\w\g\f\q\y\l\t\0\z\4\4\0\f\k\3\u\0\0\a\7\w\z\y\r\b\j\1\u\t\x\c\g\q\m\g\4\6\1\y\t\5\p\e\n\f\1\r\d\5\6\m\2\y\v\z\u\4\8\x\a\s\7\n\g\m\g\d\4\k\a\m\c\c\t\q\v\s\9\g\l\k\j\q\u\1\j\o\z\f\r\k\t\d\4\1\x\y\a\0\g\p\n\h\g\s\6\x\o\b\i\a\o\w\d\e\3\u\3\m\k\1\n\a\l\b\j\q\v\a\f\j\v\z\q\q\p\z\n\t\h\k\c\h\a\k\k\4\n\k\u\8\a\x\m\z\a\y\x\z\o\r\s\a\j\q\v\j\2\m\7\8\6\d\6\6\d\y\e\x\g\2\4\x\3\s\l\j\6\9\g\b\3\u\i\v\6\n\j\b\l\i\0\6\g\m\r\y\v\j\1\w\1\7\6\f\h\h\c\a\o\g\2\f\z\t\v\u\r\v\j\l\l\u\v\w\a\b\q\0\k\3\7\r\i\3\w\u\j\0\3\g\s\a\8\i\0\p\i\4\a\2\q\b\0\k\8\3\9\z\r\n\d\x\x\0\h\k\r\y\8\o\e\y\6\t\k\z\c\p\g\s\0\t\n\8\1\4\t\j\2\5\j\y\1\r\i\y\2\w\k\m\l\s\2\5\b\6\8\4\9\p\y\u\j\h\9\v\c\g\p\s\x\x\q\l\5\7\2\6\n\k\b\o\m\7\7\y\u\g\u\l\y\2\s\z\m\d\c\t\8\y\s\3\z\p\3\0\1\s\s\d\r\a\x\6\j\7\l\i\2\u\k\9\l\p\b\w\6\h\3\c\p\l\h\d\6\u\0\e\b\i\w\r\3\t\s\b\i\c\m\q\7\c\w\p ]] 00:08:20.240 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.240 01:29:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:20.240 [2024-12-16 01:29:50.717258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
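The walls of backslashes in the [[ ... == \3\v\4... ]] lines here and in the earlier flag tests are not log corruption: bash xtrace prints the quoted right-hand side of == with every character escaped, which marks it as a literal (non-glob) comparison of the copied file's contents against the expected 512-character string. Stripped of the tracing, each check is simply:

expected=$(<dd.dump0)
actual=$(<dd.dump1)
# Quoting the right-hand side disables pattern matching; set -x renders it fully escaped.
[[ "$actual" == "$expected" ]] && echo 'dump1 matches dump0'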
00:08:20.240 [2024-12-16 01:29:50.717369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75144 ] 00:08:20.240 [2024-12-16 01:29:50.858897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.240 [2024-12-16 01:29:50.877275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.499 [2024-12-16 01:29:50.905769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.499  [2024-12-16T01:29:51.157Z] Copying: 512/512 [B] (average 500 kBps) 00:08:20.499 00:08:20.499 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3v4z957birhi94zsqu1nge9hrl43ix18a9vwuwpuiboihvoo6sl6lp1wvir5628jgbbz6wipwq6b4gzma2a732jvh5m1ruv69wgfqylt0z440fk3u00a7wzyrbj1utxcgqmg461yt5penf1rd56m2yvzu48xas7ngmgd4kamcctqvs9glkjqu1jozfrktd41xya0gpnhgs6xobiaowde3u3mk1nalbjqvafjvzqqpznthkchakk4nku8axmzayxzorsajqvj2m786d66dyexg24x3slj69gb3uiv6njbli06gmryvj1w176fhhcaog2fztvurvjlluvwabq0k37ri3wuj03gsa8i0pi4a2qb0k839zrndxx0hkry8oey6tkzcpgs0tn814tj25jy1riy2wkmls25b6849pyujh9vcgpsxxql5726nkbom77yuguly2szmdct8ys3zp301ssdrax6j7li2uk9lpbw6h3cplhd6u0ebiwr3tsbicmq7cwp == \3\v\4\z\9\5\7\b\i\r\h\i\9\4\z\s\q\u\1\n\g\e\9\h\r\l\4\3\i\x\1\8\a\9\v\w\u\w\p\u\i\b\o\i\h\v\o\o\6\s\l\6\l\p\1\w\v\i\r\5\6\2\8\j\g\b\b\z\6\w\i\p\w\q\6\b\4\g\z\m\a\2\a\7\3\2\j\v\h\5\m\1\r\u\v\6\9\w\g\f\q\y\l\t\0\z\4\4\0\f\k\3\u\0\0\a\7\w\z\y\r\b\j\1\u\t\x\c\g\q\m\g\4\6\1\y\t\5\p\e\n\f\1\r\d\5\6\m\2\y\v\z\u\4\8\x\a\s\7\n\g\m\g\d\4\k\a\m\c\c\t\q\v\s\9\g\l\k\j\q\u\1\j\o\z\f\r\k\t\d\4\1\x\y\a\0\g\p\n\h\g\s\6\x\o\b\i\a\o\w\d\e\3\u\3\m\k\1\n\a\l\b\j\q\v\a\f\j\v\z\q\q\p\z\n\t\h\k\c\h\a\k\k\4\n\k\u\8\a\x\m\z\a\y\x\z\o\r\s\a\j\q\v\j\2\m\7\8\6\d\6\6\d\y\e\x\g\2\4\x\3\s\l\j\6\9\g\b\3\u\i\v\6\n\j\b\l\i\0\6\g\m\r\y\v\j\1\w\1\7\6\f\h\h\c\a\o\g\2\f\z\t\v\u\r\v\j\l\l\u\v\w\a\b\q\0\k\3\7\r\i\3\w\u\j\0\3\g\s\a\8\i\0\p\i\4\a\2\q\b\0\k\8\3\9\z\r\n\d\x\x\0\h\k\r\y\8\o\e\y\6\t\k\z\c\p\g\s\0\t\n\8\1\4\t\j\2\5\j\y\1\r\i\y\2\w\k\m\l\s\2\5\b\6\8\4\9\p\y\u\j\h\9\v\c\g\p\s\x\x\q\l\5\7\2\6\n\k\b\o\m\7\7\y\u\g\u\l\y\2\s\z\m\d\c\t\8\y\s\3\z\p\3\0\1\s\s\d\r\a\x\6\j\7\l\i\2\u\k\9\l\p\b\w\6\h\3\c\p\l\h\d\6\u\0\e\b\i\w\r\3\t\s\b\i\c\m\q\7\c\w\p ]] 00:08:20.499 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.499 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:20.499 [2024-12-16 01:29:51.089595] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:20.499 [2024-12-16 01:29:51.089721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75148 ] 00:08:20.758 [2024-12-16 01:29:51.236321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.758 [2024-12-16 01:29:51.255747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.758 [2024-12-16 01:29:51.283167] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.758  [2024-12-16T01:29:51.416Z] Copying: 512/512 [B] (average 125 kBps) 00:08:20.758 00:08:20.758 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3v4z957birhi94zsqu1nge9hrl43ix18a9vwuwpuiboihvoo6sl6lp1wvir5628jgbbz6wipwq6b4gzma2a732jvh5m1ruv69wgfqylt0z440fk3u00a7wzyrbj1utxcgqmg461yt5penf1rd56m2yvzu48xas7ngmgd4kamcctqvs9glkjqu1jozfrktd41xya0gpnhgs6xobiaowde3u3mk1nalbjqvafjvzqqpznthkchakk4nku8axmzayxzorsajqvj2m786d66dyexg24x3slj69gb3uiv6njbli06gmryvj1w176fhhcaog2fztvurvjlluvwabq0k37ri3wuj03gsa8i0pi4a2qb0k839zrndxx0hkry8oey6tkzcpgs0tn814tj25jy1riy2wkmls25b6849pyujh9vcgpsxxql5726nkbom77yuguly2szmdct8ys3zp301ssdrax6j7li2uk9lpbw6h3cplhd6u0ebiwr3tsbicmq7cwp == \3\v\4\z\9\5\7\b\i\r\h\i\9\4\z\s\q\u\1\n\g\e\9\h\r\l\4\3\i\x\1\8\a\9\v\w\u\w\p\u\i\b\o\i\h\v\o\o\6\s\l\6\l\p\1\w\v\i\r\5\6\2\8\j\g\b\b\z\6\w\i\p\w\q\6\b\4\g\z\m\a\2\a\7\3\2\j\v\h\5\m\1\r\u\v\6\9\w\g\f\q\y\l\t\0\z\4\4\0\f\k\3\u\0\0\a\7\w\z\y\r\b\j\1\u\t\x\c\g\q\m\g\4\6\1\y\t\5\p\e\n\f\1\r\d\5\6\m\2\y\v\z\u\4\8\x\a\s\7\n\g\m\g\d\4\k\a\m\c\c\t\q\v\s\9\g\l\k\j\q\u\1\j\o\z\f\r\k\t\d\4\1\x\y\a\0\g\p\n\h\g\s\6\x\o\b\i\a\o\w\d\e\3\u\3\m\k\1\n\a\l\b\j\q\v\a\f\j\v\z\q\q\p\z\n\t\h\k\c\h\a\k\k\4\n\k\u\8\a\x\m\z\a\y\x\z\o\r\s\a\j\q\v\j\2\m\7\8\6\d\6\6\d\y\e\x\g\2\4\x\3\s\l\j\6\9\g\b\3\u\i\v\6\n\j\b\l\i\0\6\g\m\r\y\v\j\1\w\1\7\6\f\h\h\c\a\o\g\2\f\z\t\v\u\r\v\j\l\l\u\v\w\a\b\q\0\k\3\7\r\i\3\w\u\j\0\3\g\s\a\8\i\0\p\i\4\a\2\q\b\0\k\8\3\9\z\r\n\d\x\x\0\h\k\r\y\8\o\e\y\6\t\k\z\c\p\g\s\0\t\n\8\1\4\t\j\2\5\j\y\1\r\i\y\2\w\k\m\l\s\2\5\b\6\8\4\9\p\y\u\j\h\9\v\c\g\p\s\x\x\q\l\5\7\2\6\n\k\b\o\m\7\7\y\u\g\u\l\y\2\s\z\m\d\c\t\8\y\s\3\z\p\3\0\1\s\s\d\r\a\x\6\j\7\l\i\2\u\k\9\l\p\b\w\6\h\3\c\p\l\h\d\6\u\0\e\b\i\w\r\3\t\s\b\i\c\m\q\7\c\w\p ]] 00:08:20.758 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:20.758 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:21.017 [2024-12-16 01:29:51.469690] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
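The last two output flags in the matrix open the destination with O_SYNC and O_DSYNC: each write then returns only once the data (plus, for sync, the metadata needed to retrieve it) is on stable storage, which is consistent with the lower average throughput these runs report compared with the direct and nonblock cases. Equivalent single invocations with GNU dd:

dd if=dd.dump0 of=dd.dump1 bs=512 oflag=dsync status=none   # O_DSYNC: durable data per write
dd if=dd.dump0 of=dd.dump1 bs=512 oflag=sync  status=none   # O_SYNC: durable data and metadata per write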
00:08:21.017 [2024-12-16 01:29:51.469793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75152 ] 00:08:21.017 [2024-12-16 01:29:51.616550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.017 [2024-12-16 01:29:51.636164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.017 [2024-12-16 01:29:51.666717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.276  [2024-12-16T01:29:51.934Z] Copying: 512/512 [B] (average 250 kBps) 00:08:21.276 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 3v4z957birhi94zsqu1nge9hrl43ix18a9vwuwpuiboihvoo6sl6lp1wvir5628jgbbz6wipwq6b4gzma2a732jvh5m1ruv69wgfqylt0z440fk3u00a7wzyrbj1utxcgqmg461yt5penf1rd56m2yvzu48xas7ngmgd4kamcctqvs9glkjqu1jozfrktd41xya0gpnhgs6xobiaowde3u3mk1nalbjqvafjvzqqpznthkchakk4nku8axmzayxzorsajqvj2m786d66dyexg24x3slj69gb3uiv6njbli06gmryvj1w176fhhcaog2fztvurvjlluvwabq0k37ri3wuj03gsa8i0pi4a2qb0k839zrndxx0hkry8oey6tkzcpgs0tn814tj25jy1riy2wkmls25b6849pyujh9vcgpsxxql5726nkbom77yuguly2szmdct8ys3zp301ssdrax6j7li2uk9lpbw6h3cplhd6u0ebiwr3tsbicmq7cwp == \3\v\4\z\9\5\7\b\i\r\h\i\9\4\z\s\q\u\1\n\g\e\9\h\r\l\4\3\i\x\1\8\a\9\v\w\u\w\p\u\i\b\o\i\h\v\o\o\6\s\l\6\l\p\1\w\v\i\r\5\6\2\8\j\g\b\b\z\6\w\i\p\w\q\6\b\4\g\z\m\a\2\a\7\3\2\j\v\h\5\m\1\r\u\v\6\9\w\g\f\q\y\l\t\0\z\4\4\0\f\k\3\u\0\0\a\7\w\z\y\r\b\j\1\u\t\x\c\g\q\m\g\4\6\1\y\t\5\p\e\n\f\1\r\d\5\6\m\2\y\v\z\u\4\8\x\a\s\7\n\g\m\g\d\4\k\a\m\c\c\t\q\v\s\9\g\l\k\j\q\u\1\j\o\z\f\r\k\t\d\4\1\x\y\a\0\g\p\n\h\g\s\6\x\o\b\i\a\o\w\d\e\3\u\3\m\k\1\n\a\l\b\j\q\v\a\f\j\v\z\q\q\p\z\n\t\h\k\c\h\a\k\k\4\n\k\u\8\a\x\m\z\a\y\x\z\o\r\s\a\j\q\v\j\2\m\7\8\6\d\6\6\d\y\e\x\g\2\4\x\3\s\l\j\6\9\g\b\3\u\i\v\6\n\j\b\l\i\0\6\g\m\r\y\v\j\1\w\1\7\6\f\h\h\c\a\o\g\2\f\z\t\v\u\r\v\j\l\l\u\v\w\a\b\q\0\k\3\7\r\i\3\w\u\j\0\3\g\s\a\8\i\0\p\i\4\a\2\q\b\0\k\8\3\9\z\r\n\d\x\x\0\h\k\r\y\8\o\e\y\6\t\k\z\c\p\g\s\0\t\n\8\1\4\t\j\2\5\j\y\1\r\i\y\2\w\k\m\l\s\2\5\b\6\8\4\9\p\y\u\j\h\9\v\c\g\p\s\x\x\q\l\5\7\2\6\n\k\b\o\m\7\7\y\u\g\u\l\y\2\s\z\m\d\c\t\8\y\s\3\z\p\3\0\1\s\s\d\r\a\x\6\j\7\l\i\2\u\k\9\l\p\b\w\6\h\3\c\p\l\h\d\6\u\0\e\b\i\w\r\3\t\s\b\i\c\m\q\7\c\w\p ]] 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.276 01:29:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:21.276 [2024-12-16 01:29:51.858628] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:21.276 [2024-12-16 01:29:51.858717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75167 ] 00:08:21.535 [2024-12-16 01:29:52.004465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.535 [2024-12-16 01:29:52.024176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.535 [2024-12-16 01:29:52.054391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.535  [2024-12-16T01:29:52.193Z] Copying: 512/512 [B] (average 500 kBps) 00:08:21.535 00:08:21.535 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lmn5mxf2nuhi23gf621b1jg27ef6bi9x98wopkjcmotk3mpmhlp9kjkhpbljo0z8behhex3mh7lhzhhf5vfrbyhgjex4fjg546zc53f5w9bttkmvbtul3verbs1q268mjjvp1dotyirj524hkx5p5e2ws5uajw9q2127f2zbz7pcdqez5ordecw1fo9y5hzyz81kfma7vdynvvp706taeswqc5j65ntxggkg1k1da6vc1yk5g0zxnkgj2raaucu8bwja9zt60poi6iym7y3fuqlwqnp9zxlhfhaf1mh0aglsnqcf40mh96r4f1hi7dssl1uezifstyezoh8csemwf32mgayae3pbyci8xxegrnl0hdqusfxpz5n1ldwafsc94r4l63t7iyn9pnzy5ronvv5j653vv1m1kpfsd0jzze8986c0oca6gdmzp4idbriw5oslamedyirrxf0uompq79ies2jq5cyheoe59rw0f40p60y6zno4ib9odmpxr3fi == \l\m\n\5\m\x\f\2\n\u\h\i\2\3\g\f\6\2\1\b\1\j\g\2\7\e\f\6\b\i\9\x\9\8\w\o\p\k\j\c\m\o\t\k\3\m\p\m\h\l\p\9\k\j\k\h\p\b\l\j\o\0\z\8\b\e\h\h\e\x\3\m\h\7\l\h\z\h\h\f\5\v\f\r\b\y\h\g\j\e\x\4\f\j\g\5\4\6\z\c\5\3\f\5\w\9\b\t\t\k\m\v\b\t\u\l\3\v\e\r\b\s\1\q\2\6\8\m\j\j\v\p\1\d\o\t\y\i\r\j\5\2\4\h\k\x\5\p\5\e\2\w\s\5\u\a\j\w\9\q\2\1\2\7\f\2\z\b\z\7\p\c\d\q\e\z\5\o\r\d\e\c\w\1\f\o\9\y\5\h\z\y\z\8\1\k\f\m\a\7\v\d\y\n\v\v\p\7\0\6\t\a\e\s\w\q\c\5\j\6\5\n\t\x\g\g\k\g\1\k\1\d\a\6\v\c\1\y\k\5\g\0\z\x\n\k\g\j\2\r\a\a\u\c\u\8\b\w\j\a\9\z\t\6\0\p\o\i\6\i\y\m\7\y\3\f\u\q\l\w\q\n\p\9\z\x\l\h\f\h\a\f\1\m\h\0\a\g\l\s\n\q\c\f\4\0\m\h\9\6\r\4\f\1\h\i\7\d\s\s\l\1\u\e\z\i\f\s\t\y\e\z\o\h\8\c\s\e\m\w\f\3\2\m\g\a\y\a\e\3\p\b\y\c\i\8\x\x\e\g\r\n\l\0\h\d\q\u\s\f\x\p\z\5\n\1\l\d\w\a\f\s\c\9\4\r\4\l\6\3\t\7\i\y\n\9\p\n\z\y\5\r\o\n\v\v\5\j\6\5\3\v\v\1\m\1\k\p\f\s\d\0\j\z\z\e\8\9\8\6\c\0\o\c\a\6\g\d\m\z\p\4\i\d\b\r\i\w\5\o\s\l\a\m\e\d\y\i\r\r\x\f\0\u\o\m\p\q\7\9\i\e\s\2\j\q\5\c\y\h\e\o\e\5\9\r\w\0\f\4\0\p\6\0\y\6\z\n\o\4\i\b\9\o\d\m\p\x\r\3\f\i ]] 00:08:21.535 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.535 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:21.794 [2024-12-16 01:29:52.238368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:21.794 [2024-12-16 01:29:52.239101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75171 ] 00:08:21.794 [2024-12-16 01:29:52.380669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.794 [2024-12-16 01:29:52.398645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.794 [2024-12-16 01:29:52.425572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.794  [2024-12-16T01:29:52.711Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.053 00:08:22.053 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lmn5mxf2nuhi23gf621b1jg27ef6bi9x98wopkjcmotk3mpmhlp9kjkhpbljo0z8behhex3mh7lhzhhf5vfrbyhgjex4fjg546zc53f5w9bttkmvbtul3verbs1q268mjjvp1dotyirj524hkx5p5e2ws5uajw9q2127f2zbz7pcdqez5ordecw1fo9y5hzyz81kfma7vdynvvp706taeswqc5j65ntxggkg1k1da6vc1yk5g0zxnkgj2raaucu8bwja9zt60poi6iym7y3fuqlwqnp9zxlhfhaf1mh0aglsnqcf40mh96r4f1hi7dssl1uezifstyezoh8csemwf32mgayae3pbyci8xxegrnl0hdqusfxpz5n1ldwafsc94r4l63t7iyn9pnzy5ronvv5j653vv1m1kpfsd0jzze8986c0oca6gdmzp4idbriw5oslamedyirrxf0uompq79ies2jq5cyheoe59rw0f40p60y6zno4ib9odmpxr3fi == \l\m\n\5\m\x\f\2\n\u\h\i\2\3\g\f\6\2\1\b\1\j\g\2\7\e\f\6\b\i\9\x\9\8\w\o\p\k\j\c\m\o\t\k\3\m\p\m\h\l\p\9\k\j\k\h\p\b\l\j\o\0\z\8\b\e\h\h\e\x\3\m\h\7\l\h\z\h\h\f\5\v\f\r\b\y\h\g\j\e\x\4\f\j\g\5\4\6\z\c\5\3\f\5\w\9\b\t\t\k\m\v\b\t\u\l\3\v\e\r\b\s\1\q\2\6\8\m\j\j\v\p\1\d\o\t\y\i\r\j\5\2\4\h\k\x\5\p\5\e\2\w\s\5\u\a\j\w\9\q\2\1\2\7\f\2\z\b\z\7\p\c\d\q\e\z\5\o\r\d\e\c\w\1\f\o\9\y\5\h\z\y\z\8\1\k\f\m\a\7\v\d\y\n\v\v\p\7\0\6\t\a\e\s\w\q\c\5\j\6\5\n\t\x\g\g\k\g\1\k\1\d\a\6\v\c\1\y\k\5\g\0\z\x\n\k\g\j\2\r\a\a\u\c\u\8\b\w\j\a\9\z\t\6\0\p\o\i\6\i\y\m\7\y\3\f\u\q\l\w\q\n\p\9\z\x\l\h\f\h\a\f\1\m\h\0\a\g\l\s\n\q\c\f\4\0\m\h\9\6\r\4\f\1\h\i\7\d\s\s\l\1\u\e\z\i\f\s\t\y\e\z\o\h\8\c\s\e\m\w\f\3\2\m\g\a\y\a\e\3\p\b\y\c\i\8\x\x\e\g\r\n\l\0\h\d\q\u\s\f\x\p\z\5\n\1\l\d\w\a\f\s\c\9\4\r\4\l\6\3\t\7\i\y\n\9\p\n\z\y\5\r\o\n\v\v\5\j\6\5\3\v\v\1\m\1\k\p\f\s\d\0\j\z\z\e\8\9\8\6\c\0\o\c\a\6\g\d\m\z\p\4\i\d\b\r\i\w\5\o\s\l\a\m\e\d\y\i\r\r\x\f\0\u\o\m\p\q\7\9\i\e\s\2\j\q\5\c\y\h\e\o\e\5\9\r\w\0\f\4\0\p\6\0\y\6\z\n\o\4\i\b\9\o\d\m\p\x\r\3\f\i ]] 00:08:22.053 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.053 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:22.053 [2024-12-16 01:29:52.610624] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:22.053 [2024-12-16 01:29:52.610719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75180 ] 00:08:22.312 [2024-12-16 01:29:52.756853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.312 [2024-12-16 01:29:52.775096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.312 [2024-12-16 01:29:52.802141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.312  [2024-12-16T01:29:52.970Z] Copying: 512/512 [B] (average 250 kBps) 00:08:22.312 00:08:22.312 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lmn5mxf2nuhi23gf621b1jg27ef6bi9x98wopkjcmotk3mpmhlp9kjkhpbljo0z8behhex3mh7lhzhhf5vfrbyhgjex4fjg546zc53f5w9bttkmvbtul3verbs1q268mjjvp1dotyirj524hkx5p5e2ws5uajw9q2127f2zbz7pcdqez5ordecw1fo9y5hzyz81kfma7vdynvvp706taeswqc5j65ntxggkg1k1da6vc1yk5g0zxnkgj2raaucu8bwja9zt60poi6iym7y3fuqlwqnp9zxlhfhaf1mh0aglsnqcf40mh96r4f1hi7dssl1uezifstyezoh8csemwf32mgayae3pbyci8xxegrnl0hdqusfxpz5n1ldwafsc94r4l63t7iyn9pnzy5ronvv5j653vv1m1kpfsd0jzze8986c0oca6gdmzp4idbriw5oslamedyirrxf0uompq79ies2jq5cyheoe59rw0f40p60y6zno4ib9odmpxr3fi == \l\m\n\5\m\x\f\2\n\u\h\i\2\3\g\f\6\2\1\b\1\j\g\2\7\e\f\6\b\i\9\x\9\8\w\o\p\k\j\c\m\o\t\k\3\m\p\m\h\l\p\9\k\j\k\h\p\b\l\j\o\0\z\8\b\e\h\h\e\x\3\m\h\7\l\h\z\h\h\f\5\v\f\r\b\y\h\g\j\e\x\4\f\j\g\5\4\6\z\c\5\3\f\5\w\9\b\t\t\k\m\v\b\t\u\l\3\v\e\r\b\s\1\q\2\6\8\m\j\j\v\p\1\d\o\t\y\i\r\j\5\2\4\h\k\x\5\p\5\e\2\w\s\5\u\a\j\w\9\q\2\1\2\7\f\2\z\b\z\7\p\c\d\q\e\z\5\o\r\d\e\c\w\1\f\o\9\y\5\h\z\y\z\8\1\k\f\m\a\7\v\d\y\n\v\v\p\7\0\6\t\a\e\s\w\q\c\5\j\6\5\n\t\x\g\g\k\g\1\k\1\d\a\6\v\c\1\y\k\5\g\0\z\x\n\k\g\j\2\r\a\a\u\c\u\8\b\w\j\a\9\z\t\6\0\p\o\i\6\i\y\m\7\y\3\f\u\q\l\w\q\n\p\9\z\x\l\h\f\h\a\f\1\m\h\0\a\g\l\s\n\q\c\f\4\0\m\h\9\6\r\4\f\1\h\i\7\d\s\s\l\1\u\e\z\i\f\s\t\y\e\z\o\h\8\c\s\e\m\w\f\3\2\m\g\a\y\a\e\3\p\b\y\c\i\8\x\x\e\g\r\n\l\0\h\d\q\u\s\f\x\p\z\5\n\1\l\d\w\a\f\s\c\9\4\r\4\l\6\3\t\7\i\y\n\9\p\n\z\y\5\r\o\n\v\v\5\j\6\5\3\v\v\1\m\1\k\p\f\s\d\0\j\z\z\e\8\9\8\6\c\0\o\c\a\6\g\d\m\z\p\4\i\d\b\r\i\w\5\o\s\l\a\m\e\d\y\i\r\r\x\f\0\u\o\m\p\q\7\9\i\e\s\2\j\q\5\c\y\h\e\o\e\5\9\r\w\0\f\4\0\p\6\0\y\6\z\n\o\4\i\b\9\o\d\m\p\x\r\3\f\i ]] 00:08:22.312 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.312 01:29:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:22.571 [2024-12-16 01:29:52.986337] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:22.571 [2024-12-16 01:29:52.986459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75190 ] 00:08:22.571 [2024-12-16 01:29:53.131791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.571 [2024-12-16 01:29:53.150910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.571 [2024-12-16 01:29:53.181554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.571  [2024-12-16T01:29:53.487Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.829 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ lmn5mxf2nuhi23gf621b1jg27ef6bi9x98wopkjcmotk3mpmhlp9kjkhpbljo0z8behhex3mh7lhzhhf5vfrbyhgjex4fjg546zc53f5w9bttkmvbtul3verbs1q268mjjvp1dotyirj524hkx5p5e2ws5uajw9q2127f2zbz7pcdqez5ordecw1fo9y5hzyz81kfma7vdynvvp706taeswqc5j65ntxggkg1k1da6vc1yk5g0zxnkgj2raaucu8bwja9zt60poi6iym7y3fuqlwqnp9zxlhfhaf1mh0aglsnqcf40mh96r4f1hi7dssl1uezifstyezoh8csemwf32mgayae3pbyci8xxegrnl0hdqusfxpz5n1ldwafsc94r4l63t7iyn9pnzy5ronvv5j653vv1m1kpfsd0jzze8986c0oca6gdmzp4idbriw5oslamedyirrxf0uompq79ies2jq5cyheoe59rw0f40p60y6zno4ib9odmpxr3fi == \l\m\n\5\m\x\f\2\n\u\h\i\2\3\g\f\6\2\1\b\1\j\g\2\7\e\f\6\b\i\9\x\9\8\w\o\p\k\j\c\m\o\t\k\3\m\p\m\h\l\p\9\k\j\k\h\p\b\l\j\o\0\z\8\b\e\h\h\e\x\3\m\h\7\l\h\z\h\h\f\5\v\f\r\b\y\h\g\j\e\x\4\f\j\g\5\4\6\z\c\5\3\f\5\w\9\b\t\t\k\m\v\b\t\u\l\3\v\e\r\b\s\1\q\2\6\8\m\j\j\v\p\1\d\o\t\y\i\r\j\5\2\4\h\k\x\5\p\5\e\2\w\s\5\u\a\j\w\9\q\2\1\2\7\f\2\z\b\z\7\p\c\d\q\e\z\5\o\r\d\e\c\w\1\f\o\9\y\5\h\z\y\z\8\1\k\f\m\a\7\v\d\y\n\v\v\p\7\0\6\t\a\e\s\w\q\c\5\j\6\5\n\t\x\g\g\k\g\1\k\1\d\a\6\v\c\1\y\k\5\g\0\z\x\n\k\g\j\2\r\a\a\u\c\u\8\b\w\j\a\9\z\t\6\0\p\o\i\6\i\y\m\7\y\3\f\u\q\l\w\q\n\p\9\z\x\l\h\f\h\a\f\1\m\h\0\a\g\l\s\n\q\c\f\4\0\m\h\9\6\r\4\f\1\h\i\7\d\s\s\l\1\u\e\z\i\f\s\t\y\e\z\o\h\8\c\s\e\m\w\f\3\2\m\g\a\y\a\e\3\p\b\y\c\i\8\x\x\e\g\r\n\l\0\h\d\q\u\s\f\x\p\z\5\n\1\l\d\w\a\f\s\c\9\4\r\4\l\6\3\t\7\i\y\n\9\p\n\z\y\5\r\o\n\v\v\5\j\6\5\3\v\v\1\m\1\k\p\f\s\d\0\j\z\z\e\8\9\8\6\c\0\o\c\a\6\g\d\m\z\p\4\i\d\b\r\i\w\5\o\s\l\a\m\e\d\y\i\r\r\x\f\0\u\o\m\p\q\7\9\i\e\s\2\j\q\5\c\y\h\e\o\e\5\9\r\w\0\f\4\0\p\6\0\y\6\z\n\o\4\i\b\9\o\d\m\p\x\r\3\f\i ]] 00:08:22.830 00:08:22.830 real 0m3.027s 00:08:22.830 user 0m1.455s 00:08:22.830 sys 0m1.353s 00:08:22.830 ************************************ 00:08:22.830 END TEST dd_flags_misc 00:08:22.830 ************************************ 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:22.830 * Second test run, disabling liburing, forcing AIO 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:22.830 ************************************ 00:08:22.830 START TEST dd_flag_append_forced_aio 00:08:22.830 ************************************ 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=edlgm6kwqusgbaguiw7dsv7icd4vylki 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=z70utp4nyhtxvvzqpb3y0926q8nk1p1c 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s edlgm6kwqusgbaguiw7dsv7icd4vylki 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s z70utp4nyhtxvvzqpb3y0926q8nk1p1c 00:08:22.830 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:22.830 [2024-12-16 01:29:53.424273] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
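From the "* Second test run, disabling liburing, forcing AIO" banner onward, the whole posix suite is repeated with DD_APP+=("--aio"), so every subsequent invocation becomes spdk_dd --aio ... and exercises the AIO path instead of liburing; the dd_flag_append_forced_aio run above is the first of those re-runs. In outline (binary path as it appears in this log, dump file names abbreviated):

DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
DD_APP+=("--aio")   # dd/posix.sh@113 in the trace above
# Later tests launch through the array, e.g. the append re-run:
"${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append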
00:08:22.830 [2024-12-16 01:29:53.424376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75213 ] 00:08:23.088 [2024-12-16 01:29:53.570920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.088 [2024-12-16 01:29:53.589204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.088 [2024-12-16 01:29:53.616570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.088  [2024-12-16T01:29:54.005Z] Copying: 32/32 [B] (average 31 kBps) 00:08:23.347 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ z70utp4nyhtxvvzqpb3y0926q8nk1p1cedlgm6kwqusgbaguiw7dsv7icd4vylki == \z\7\0\u\t\p\4\n\y\h\t\x\v\v\z\q\p\b\3\y\0\9\2\6\q\8\n\k\1\p\1\c\e\d\l\g\m\6\k\w\q\u\s\g\b\a\g\u\i\w\7\d\s\v\7\i\c\d\4\v\y\l\k\i ]] 00:08:23.347 00:08:23.347 real 0m0.410s 00:08:23.347 user 0m0.188s 00:08:23.347 sys 0m0.096s 00:08:23.347 ************************************ 00:08:23.347 END TEST dd_flag_append_forced_aio 00:08:23.347 ************************************ 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:23.347 ************************************ 00:08:23.347 START TEST dd_flag_directory_forced_aio 00:08:23.347 ************************************ 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.347 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.348 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.348 01:29:53 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.348 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.348 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.348 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.348 01:29:53 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:23.348 [2024-12-16 01:29:53.882569] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:23.348 [2024-12-16 01:29:53.882660] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75245 ] 00:08:23.607 [2024-12-16 01:29:54.027798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.607 [2024-12-16 01:29:54.046428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.607 [2024-12-16 01:29:54.075679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.607 [2024-12-16 01:29:54.092934] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:23.607 [2024-12-16 01:29:54.093016] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:23.607 [2024-12-16 01:29:54.093044] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.607 [2024-12-16 01:29:54.153121] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.607 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:23.607 [2024-12-16 01:29:54.261613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:23.607 [2024-12-16 01:29:54.261728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75249 ] 00:08:23.866 [2024-12-16 01:29:54.404307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.866 [2024-12-16 01:29:54.422652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.866 [2024-12-16 01:29:54.449551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.866 [2024-12-16 01:29:54.465538] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:23.866 [2024-12-16 01:29:54.465905] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:23.866 [2024-12-16 01:29:54.466053] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.124 [2024-12-16 01:29:54.525772] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:24.124 ************************************ 00:08:24.124 END TEST dd_flag_directory_forced_aio 00:08:24.124 ************************************ 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:24.124 01:29:54 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.124 00:08:24.124 real 0m0.751s 00:08:24.124 user 0m0.366s 00:08:24.124 sys 0m0.177s 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:24.124 ************************************ 00:08:24.124 START TEST dd_flag_nofollow_forced_aio 00:08:24.124 ************************************ 00:08:24.124 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.125 01:29:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.125 [2024-12-16 01:29:54.692037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:24.125 [2024-12-16 01:29:54.692127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75277 ] 00:08:24.384 [2024-12-16 01:29:54.836958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.384 [2024-12-16 01:29:54.856956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.384 [2024-12-16 01:29:54.884570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.384 [2024-12-16 01:29:54.900887] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:24.384 [2024-12-16 01:29:54.900938] spdk_dd.c:1081:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:24.384 [2024-12-16 01:29:54.900967] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.384 [2024-12-16 01:29:54.959881] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:24.384 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:24.649 [2024-12-16 01:29:55.074825] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:24.649 [2024-12-16 01:29:55.074916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75287 ] 00:08:24.649 [2024-12-16 01:29:55.221825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.649 [2024-12-16 01:29:55.240257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.649 [2024-12-16 01:29:55.269569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.649 [2024-12-16 01:29:55.287209] spdk_dd.c: 892:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:24.649 [2024-12-16 01:29:55.287262] spdk_dd.c:1130:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:24.649 [2024-12-16 01:29:55.287276] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.910 [2024-12-16 01:29:55.348704] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:24.910 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:24.910 [2024-12-16 01:29:55.463862] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:24.910 [2024-12-16 01:29:55.464106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75289 ] 00:08:25.168 [2024-12-16 01:29:55.608932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.168 [2024-12-16 01:29:55.630680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.168 [2024-12-16 01:29:55.659999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.168  [2024-12-16T01:29:55.826Z] Copying: 512/512 [B] (average 500 kBps) 00:08:25.168 00:08:25.168 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ zxt5bee9yfxjt2p5mhaz5slb6ophqqn12818wnle9xfyulhiwimrjum6atjstdlek7ncowsgoxz4k99bt8hjcuqpmvtxswyrpbiiw6m7eb4szkh87634gq7awm3xhh9lwltmah0z2k6ax1x1i8paxtvbaoxtlyyyx4zl4ij2tymavhfaqor2vsa8mfgrgqd8ufkupxet83f413bu5x1gi87vyq7o50f7czbpusbbjfu1s7bzhf4iw4pozcn095rzpd51nvxqbwh3jm0l3xld8vznzdw376zdbitlnc86jznrvlweot1eerhggqrg3qrqm96poealeda759uwdabjvc7arpuq83ea67m71wlq9ucm3wm4qtd67cgzhlr6iq1lguxp9l2rfxmo1pv735wk30qxmyr7rgp467e9o71voh4sysoyh1vxihztrm4uqpoyikisziyedednaevhnrf26cgocwm8uvybdci6eovx4tre2yimb0rly7psbyvri5ym == \z\x\t\5\b\e\e\9\y\f\x\j\t\2\p\5\m\h\a\z\5\s\l\b\6\o\p\h\q\q\n\1\2\8\1\8\w\n\l\e\9\x\f\y\u\l\h\i\w\i\m\r\j\u\m\6\a\t\j\s\t\d\l\e\k\7\n\c\o\w\s\g\o\x\z\4\k\9\9\b\t\8\h\j\c\u\q\p\m\v\t\x\s\w\y\r\p\b\i\i\w\6\m\7\e\b\4\s\z\k\h\8\7\6\3\4\g\q\7\a\w\m\3\x\h\h\9\l\w\l\t\m\a\h\0\z\2\k\6\a\x\1\x\1\i\8\p\a\x\t\v\b\a\o\x\t\l\y\y\y\x\4\z\l\4\i\j\2\t\y\m\a\v\h\f\a\q\o\r\2\v\s\a\8\m\f\g\r\g\q\d\8\u\f\k\u\p\x\e\t\8\3\f\4\1\3\b\u\5\x\1\g\i\8\7\v\y\q\7\o\5\0\f\7\c\z\b\p\u\s\b\b\j\f\u\1\s\7\b\z\h\f\4\i\w\4\p\o\z\c\n\0\9\5\r\z\p\d\5\1\n\v\x\q\b\w\h\3\j\m\0\l\3\x\l\d\8\v\z\n\z\d\w\3\7\6\z\d\b\i\t\l\n\c\8\6\j\z\n\r\v\l\w\e\o\t\1\e\e\r\h\g\g\q\r\g\3\q\r\q\m\9\6\p\o\e\a\l\e\d\a\7\5\9\u\w\d\a\b\j\v\c\7\a\r\p\u\q\8\3\e\a\6\7\m\7\1\w\l\q\9\u\c\m\3\w\m\4\q\t\d\6\7\c\g\z\h\l\r\6\i\q\1\l\g\u\x\p\9\l\2\r\f\x\m\o\1\p\v\7\3\5\w\k\3\0\q\x\m\y\r\7\r\g\p\4\6\7\e\9\o\7\1\v\o\h\4\s\y\s\o\y\h\1\v\x\i\h\z\t\r\m\4\u\q\p\o\y\i\k\i\s\z\i\y\e\d\e\d\n\a\e\v\h\n\r\f\2\6\c\g\o\c\w\m\8\u\v\y\b\d\c\i\6\e\o\v\x\4\t\r\e\2\y\i\m\b\0\r\l\y\7\p\s\b\y\v\r\i\5\y\m ]] 00:08:25.168 00:08:25.168 real 0m1.192s 00:08:25.168 user 0m0.565s 00:08:25.168 sys 0m0.291s 00:08:25.168 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.168 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.168 ************************************ 00:08:25.168 END TEST dd_flag_nofollow_forced_aio 00:08:25.168 ************************************ 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:25.428 ************************************ 00:08:25.428 START TEST dd_flag_noatime_forced_aio 00:08:25.428 ************************************ 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1734312595 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1734312595 00:08:25.428 01:29:55 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:26.364 01:29:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.364 [2024-12-16 01:29:56.943546] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
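The noatime pass launching here reads dd.dump0 with --iflag=noatime, and the assertions that follow re-read the file's access time with stat --printf=%X and require it to be unchanged (the (( atime_if == 1734312595 )) checks). The flag name suggests Linux's O_NOATIME open(2) flag, which asks the kernel not to update st_atime on reads; that mapping is an assumption here, the log only records the command line and the timestamps. A minimal stand-alone sketch of the same check, with a placeholder file name and no SPDK involvement:

/* Sketch of the noatime check: open with O_NOATIME, read, and verify
 * that st_atime did not move. "dd.dump0" is a placeholder path. */
#define _GNU_SOURCE                     /* O_NOATIME is Linux-specific */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "dd.dump0";
    struct stat before, after;
    char buf[512];

    if (stat(path, &before) != 0) { perror("stat"); return 1; }

    /* O_NOATIME only succeeds for the file owner or with CAP_FOWNER. */
    int fd = open(path, O_RDONLY | O_NOATIME);
    if (fd < 0) { perror("open"); return 1; }
    if (read(fd, buf, sizeof buf) < 0) { perror("read"); return 1; }
    close(fd);

    if (stat(path, &after) != 0) { perror("stat"); return 1; }
    printf("atime %s (%ld -> %ld)\n",
           before.st_atime == after.st_atime ? "unchanged" : "updated",
           (long)before.st_atime, (long)after.st_atime);
    return 0;
}

Built with a plain cc, this prints "unchanged" when the access-time update is suppressed, which is the behaviour the test asserts through its two stat calls around the spdk_dd run.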
00:08:26.364 [2024-12-16 01:29:56.943636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75335 ] 00:08:26.623 [2024-12-16 01:29:57.097038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.623 [2024-12-16 01:29:57.121247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.623 [2024-12-16 01:29:57.155378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.623  [2024-12-16T01:29:57.539Z] Copying: 512/512 [B] (average 500 kBps) 00:08:26.881 00:08:26.881 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.881 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1734312595 )) 00:08:26.881 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.881 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1734312595 )) 00:08:26.881 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.881 [2024-12-16 01:29:57.376289] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:26.881 [2024-12-16 01:29:57.376381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75341 ] 00:08:26.881 [2024-12-16 01:29:57.521421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.140 [2024-12-16 01:29:57.541023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.140 [2024-12-16 01:29:57.570014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.140  [2024-12-16T01:29:57.798Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.140 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1734312597 )) 00:08:27.140 00:08:27.140 real 0m1.858s 00:08:27.140 user 0m0.405s 00:08:27.140 sys 0m0.209s 00:08:27.140 ************************************ 00:08:27.140 END TEST dd_flag_noatime_forced_aio 00:08:27.140 ************************************ 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.140 ************************************ 00:08:27.140 START TEST dd_flags_misc_forced_aio 00:08:27.140 ************************************ 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.140 01:29:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:27.421 [2024-12-16 01:29:57.844989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:27.421 [2024-12-16 01:29:57.845220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75373 ] 00:08:27.421 [2024-12-16 01:29:57.988016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.421 [2024-12-16 01:29:58.006564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.421 [2024-12-16 01:29:58.033616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.421  [2024-12-16T01:29:58.353Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.695 00:08:27.695 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v9g3td46rgg1kerwvrysrtw26xa89rurmud1l5d9cg5hjy2rzcrwld7kio9208obo8gsighn1t6ahy7j7hurqwqt08fg9zn108hum3eig3a56fjo60l18vzyqnf596qej7bzuyt0anwklv6opw6pt4asxej12mjie8hz8bjjyecrwa0237uokjlwsgk2biobvk2gcz0h23gs9zsdklk0wzmj1djrmgzl2sebeo9oniresxyj3j7dihyod1cz9ehj715ey2skbv2o1kvm9hhmhzvgcvvtcyf7qc1sivhoxgy10n10n1mmwefkdp915bwk7mut927ctoo9nulbh2zj5wu4tdqarshf9w8ft21fifz6ngjekmfz5bfqs8eb21eaoi0uegv8r691vbuphnezg36vg2u4htjieo5qx810sywzxi2ydd0pxxilnkzfhs8qmsrg0u62beg4ac7gclq3923d7o4h113fhpws9z50nxftn22w02e4x5prsvxxg1n7 == 
\v\9\g\3\t\d\4\6\r\g\g\1\k\e\r\w\v\r\y\s\r\t\w\2\6\x\a\8\9\r\u\r\m\u\d\1\l\5\d\9\c\g\5\h\j\y\2\r\z\c\r\w\l\d\7\k\i\o\9\2\0\8\o\b\o\8\g\s\i\g\h\n\1\t\6\a\h\y\7\j\7\h\u\r\q\w\q\t\0\8\f\g\9\z\n\1\0\8\h\u\m\3\e\i\g\3\a\5\6\f\j\o\6\0\l\1\8\v\z\y\q\n\f\5\9\6\q\e\j\7\b\z\u\y\t\0\a\n\w\k\l\v\6\o\p\w\6\p\t\4\a\s\x\e\j\1\2\m\j\i\e\8\h\z\8\b\j\j\y\e\c\r\w\a\0\2\3\7\u\o\k\j\l\w\s\g\k\2\b\i\o\b\v\k\2\g\c\z\0\h\2\3\g\s\9\z\s\d\k\l\k\0\w\z\m\j\1\d\j\r\m\g\z\l\2\s\e\b\e\o\9\o\n\i\r\e\s\x\y\j\3\j\7\d\i\h\y\o\d\1\c\z\9\e\h\j\7\1\5\e\y\2\s\k\b\v\2\o\1\k\v\m\9\h\h\m\h\z\v\g\c\v\v\t\c\y\f\7\q\c\1\s\i\v\h\o\x\g\y\1\0\n\1\0\n\1\m\m\w\e\f\k\d\p\9\1\5\b\w\k\7\m\u\t\9\2\7\c\t\o\o\9\n\u\l\b\h\2\z\j\5\w\u\4\t\d\q\a\r\s\h\f\9\w\8\f\t\2\1\f\i\f\z\6\n\g\j\e\k\m\f\z\5\b\f\q\s\8\e\b\2\1\e\a\o\i\0\u\e\g\v\8\r\6\9\1\v\b\u\p\h\n\e\z\g\3\6\v\g\2\u\4\h\t\j\i\e\o\5\q\x\8\1\0\s\y\w\z\x\i\2\y\d\d\0\p\x\x\i\l\n\k\z\f\h\s\8\q\m\s\r\g\0\u\6\2\b\e\g\4\a\c\7\g\c\l\q\3\9\2\3\d\7\o\4\h\1\1\3\f\h\p\w\s\9\z\5\0\n\x\f\t\n\2\2\w\0\2\e\4\x\5\p\r\s\v\x\x\g\1\n\7 ]] 00:08:27.695 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.695 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:27.695 [2024-12-16 01:29:58.228979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:27.695 [2024-12-16 01:29:58.229073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75375 ] 00:08:27.954 [2024-12-16 01:29:58.375389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.954 [2024-12-16 01:29:58.393428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.954 [2024-12-16 01:29:58.422233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.954  [2024-12-16T01:29:58.612Z] Copying: 512/512 [B] (average 500 kBps) 00:08:27.954 00:08:27.954 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v9g3td46rgg1kerwvrysrtw26xa89rurmud1l5d9cg5hjy2rzcrwld7kio9208obo8gsighn1t6ahy7j7hurqwqt08fg9zn108hum3eig3a56fjo60l18vzyqnf596qej7bzuyt0anwklv6opw6pt4asxej12mjie8hz8bjjyecrwa0237uokjlwsgk2biobvk2gcz0h23gs9zsdklk0wzmj1djrmgzl2sebeo9oniresxyj3j7dihyod1cz9ehj715ey2skbv2o1kvm9hhmhzvgcvvtcyf7qc1sivhoxgy10n10n1mmwefkdp915bwk7mut927ctoo9nulbh2zj5wu4tdqarshf9w8ft21fifz6ngjekmfz5bfqs8eb21eaoi0uegv8r691vbuphnezg36vg2u4htjieo5qx810sywzxi2ydd0pxxilnkzfhs8qmsrg0u62beg4ac7gclq3923d7o4h113fhpws9z50nxftn22w02e4x5prsvxxg1n7 == 
\v\9\g\3\t\d\4\6\r\g\g\1\k\e\r\w\v\r\y\s\r\t\w\2\6\x\a\8\9\r\u\r\m\u\d\1\l\5\d\9\c\g\5\h\j\y\2\r\z\c\r\w\l\d\7\k\i\o\9\2\0\8\o\b\o\8\g\s\i\g\h\n\1\t\6\a\h\y\7\j\7\h\u\r\q\w\q\t\0\8\f\g\9\z\n\1\0\8\h\u\m\3\e\i\g\3\a\5\6\f\j\o\6\0\l\1\8\v\z\y\q\n\f\5\9\6\q\e\j\7\b\z\u\y\t\0\a\n\w\k\l\v\6\o\p\w\6\p\t\4\a\s\x\e\j\1\2\m\j\i\e\8\h\z\8\b\j\j\y\e\c\r\w\a\0\2\3\7\u\o\k\j\l\w\s\g\k\2\b\i\o\b\v\k\2\g\c\z\0\h\2\3\g\s\9\z\s\d\k\l\k\0\w\z\m\j\1\d\j\r\m\g\z\l\2\s\e\b\e\o\9\o\n\i\r\e\s\x\y\j\3\j\7\d\i\h\y\o\d\1\c\z\9\e\h\j\7\1\5\e\y\2\s\k\b\v\2\o\1\k\v\m\9\h\h\m\h\z\v\g\c\v\v\t\c\y\f\7\q\c\1\s\i\v\h\o\x\g\y\1\0\n\1\0\n\1\m\m\w\e\f\k\d\p\9\1\5\b\w\k\7\m\u\t\9\2\7\c\t\o\o\9\n\u\l\b\h\2\z\j\5\w\u\4\t\d\q\a\r\s\h\f\9\w\8\f\t\2\1\f\i\f\z\6\n\g\j\e\k\m\f\z\5\b\f\q\s\8\e\b\2\1\e\a\o\i\0\u\e\g\v\8\r\6\9\1\v\b\u\p\h\n\e\z\g\3\6\v\g\2\u\4\h\t\j\i\e\o\5\q\x\8\1\0\s\y\w\z\x\i\2\y\d\d\0\p\x\x\i\l\n\k\z\f\h\s\8\q\m\s\r\g\0\u\6\2\b\e\g\4\a\c\7\g\c\l\q\3\9\2\3\d\7\o\4\h\1\1\3\f\h\p\w\s\9\z\5\0\n\x\f\t\n\2\2\w\0\2\e\4\x\5\p\r\s\v\x\x\g\1\n\7 ]] 00:08:27.954 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:27.954 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:28.213 [2024-12-16 01:29:58.633157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:28.213 [2024-12-16 01:29:58.633252] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75377 ] 00:08:28.213 [2024-12-16 01:29:58.777548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.213 [2024-12-16 01:29:58.798777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.213 [2024-12-16 01:29:58.829014] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.213  [2024-12-16T01:29:59.130Z] Copying: 512/512 [B] (average 125 kBps) 00:08:28.472 00:08:28.472 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v9g3td46rgg1kerwvrysrtw26xa89rurmud1l5d9cg5hjy2rzcrwld7kio9208obo8gsighn1t6ahy7j7hurqwqt08fg9zn108hum3eig3a56fjo60l18vzyqnf596qej7bzuyt0anwklv6opw6pt4asxej12mjie8hz8bjjyecrwa0237uokjlwsgk2biobvk2gcz0h23gs9zsdklk0wzmj1djrmgzl2sebeo9oniresxyj3j7dihyod1cz9ehj715ey2skbv2o1kvm9hhmhzvgcvvtcyf7qc1sivhoxgy10n10n1mmwefkdp915bwk7mut927ctoo9nulbh2zj5wu4tdqarshf9w8ft21fifz6ngjekmfz5bfqs8eb21eaoi0uegv8r691vbuphnezg36vg2u4htjieo5qx810sywzxi2ydd0pxxilnkzfhs8qmsrg0u62beg4ac7gclq3923d7o4h113fhpws9z50nxftn22w02e4x5prsvxxg1n7 == 
\v\9\g\3\t\d\4\6\r\g\g\1\k\e\r\w\v\r\y\s\r\t\w\2\6\x\a\8\9\r\u\r\m\u\d\1\l\5\d\9\c\g\5\h\j\y\2\r\z\c\r\w\l\d\7\k\i\o\9\2\0\8\o\b\o\8\g\s\i\g\h\n\1\t\6\a\h\y\7\j\7\h\u\r\q\w\q\t\0\8\f\g\9\z\n\1\0\8\h\u\m\3\e\i\g\3\a\5\6\f\j\o\6\0\l\1\8\v\z\y\q\n\f\5\9\6\q\e\j\7\b\z\u\y\t\0\a\n\w\k\l\v\6\o\p\w\6\p\t\4\a\s\x\e\j\1\2\m\j\i\e\8\h\z\8\b\j\j\y\e\c\r\w\a\0\2\3\7\u\o\k\j\l\w\s\g\k\2\b\i\o\b\v\k\2\g\c\z\0\h\2\3\g\s\9\z\s\d\k\l\k\0\w\z\m\j\1\d\j\r\m\g\z\l\2\s\e\b\e\o\9\o\n\i\r\e\s\x\y\j\3\j\7\d\i\h\y\o\d\1\c\z\9\e\h\j\7\1\5\e\y\2\s\k\b\v\2\o\1\k\v\m\9\h\h\m\h\z\v\g\c\v\v\t\c\y\f\7\q\c\1\s\i\v\h\o\x\g\y\1\0\n\1\0\n\1\m\m\w\e\f\k\d\p\9\1\5\b\w\k\7\m\u\t\9\2\7\c\t\o\o\9\n\u\l\b\h\2\z\j\5\w\u\4\t\d\q\a\r\s\h\f\9\w\8\f\t\2\1\f\i\f\z\6\n\g\j\e\k\m\f\z\5\b\f\q\s\8\e\b\2\1\e\a\o\i\0\u\e\g\v\8\r\6\9\1\v\b\u\p\h\n\e\z\g\3\6\v\g\2\u\4\h\t\j\i\e\o\5\q\x\8\1\0\s\y\w\z\x\i\2\y\d\d\0\p\x\x\i\l\n\k\z\f\h\s\8\q\m\s\r\g\0\u\6\2\b\e\g\4\a\c\7\g\c\l\q\3\9\2\3\d\7\o\4\h\1\1\3\f\h\p\w\s\9\z\5\0\n\x\f\t\n\2\2\w\0\2\e\4\x\5\p\r\s\v\x\x\g\1\n\7 ]] 00:08:28.472 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.472 01:29:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:28.472 [2024-12-16 01:29:59.036726] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:28.472 [2024-12-16 01:29:59.036820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75390 ] 00:08:28.731 [2024-12-16 01:29:59.182339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.731 [2024-12-16 01:29:59.204054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.731 [2024-12-16 01:29:59.232450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.731  [2024-12-16T01:29:59.389Z] Copying: 512/512 [B] (average 500 kBps) 00:08:28.731 00:08:28.731 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v9g3td46rgg1kerwvrysrtw26xa89rurmud1l5d9cg5hjy2rzcrwld7kio9208obo8gsighn1t6ahy7j7hurqwqt08fg9zn108hum3eig3a56fjo60l18vzyqnf596qej7bzuyt0anwklv6opw6pt4asxej12mjie8hz8bjjyecrwa0237uokjlwsgk2biobvk2gcz0h23gs9zsdklk0wzmj1djrmgzl2sebeo9oniresxyj3j7dihyod1cz9ehj715ey2skbv2o1kvm9hhmhzvgcvvtcyf7qc1sivhoxgy10n10n1mmwefkdp915bwk7mut927ctoo9nulbh2zj5wu4tdqarshf9w8ft21fifz6ngjekmfz5bfqs8eb21eaoi0uegv8r691vbuphnezg36vg2u4htjieo5qx810sywzxi2ydd0pxxilnkzfhs8qmsrg0u62beg4ac7gclq3923d7o4h113fhpws9z50nxftn22w02e4x5prsvxxg1n7 == 
\v\9\g\3\t\d\4\6\r\g\g\1\k\e\r\w\v\r\y\s\r\t\w\2\6\x\a\8\9\r\u\r\m\u\d\1\l\5\d\9\c\g\5\h\j\y\2\r\z\c\r\w\l\d\7\k\i\o\9\2\0\8\o\b\o\8\g\s\i\g\h\n\1\t\6\a\h\y\7\j\7\h\u\r\q\w\q\t\0\8\f\g\9\z\n\1\0\8\h\u\m\3\e\i\g\3\a\5\6\f\j\o\6\0\l\1\8\v\z\y\q\n\f\5\9\6\q\e\j\7\b\z\u\y\t\0\a\n\w\k\l\v\6\o\p\w\6\p\t\4\a\s\x\e\j\1\2\m\j\i\e\8\h\z\8\b\j\j\y\e\c\r\w\a\0\2\3\7\u\o\k\j\l\w\s\g\k\2\b\i\o\b\v\k\2\g\c\z\0\h\2\3\g\s\9\z\s\d\k\l\k\0\w\z\m\j\1\d\j\r\m\g\z\l\2\s\e\b\e\o\9\o\n\i\r\e\s\x\y\j\3\j\7\d\i\h\y\o\d\1\c\z\9\e\h\j\7\1\5\e\y\2\s\k\b\v\2\o\1\k\v\m\9\h\h\m\h\z\v\g\c\v\v\t\c\y\f\7\q\c\1\s\i\v\h\o\x\g\y\1\0\n\1\0\n\1\m\m\w\e\f\k\d\p\9\1\5\b\w\k\7\m\u\t\9\2\7\c\t\o\o\9\n\u\l\b\h\2\z\j\5\w\u\4\t\d\q\a\r\s\h\f\9\w\8\f\t\2\1\f\i\f\z\6\n\g\j\e\k\m\f\z\5\b\f\q\s\8\e\b\2\1\e\a\o\i\0\u\e\g\v\8\r\6\9\1\v\b\u\p\h\n\e\z\g\3\6\v\g\2\u\4\h\t\j\i\e\o\5\q\x\8\1\0\s\y\w\z\x\i\2\y\d\d\0\p\x\x\i\l\n\k\z\f\h\s\8\q\m\s\r\g\0\u\6\2\b\e\g\4\a\c\7\g\c\l\q\3\9\2\3\d\7\o\4\h\1\1\3\f\h\p\w\s\9\z\5\0\n\x\f\t\n\2\2\w\0\2\e\4\x\5\p\r\s\v\x\x\g\1\n\7 ]] 00:08:28.731 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:28.731 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:28.731 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:28.731 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:28.990 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:28.990 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:28.990 [2024-12-16 01:29:59.452892] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
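The "Not a directory" and "Too many levels of symbolic links" errors reported by the earlier dd_flag_directory and dd_flag_nofollow subtests are the expected outcomes: those strings are strerror(ENOTDIR) and strerror(ELOOP), the errors open(2) returns when O_DIRECTORY is applied to a regular file or O_NOFOLLOW to a symbolic link. Assuming that is what --iflag=directory and --iflag=nofollow translate to (the log only shows the resulting messages), a small stand-alone reproduction with placeholder file names looks like this:

/* Reproduce the two negative cases outside SPDK:
 *   O_DIRECTORY on a regular file  -> ENOTDIR ("Not a directory")
 *   O_NOFOLLOW  on a symbolic link -> ELOOP   ("Too many levels of symbolic links") */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void try_open(const char *path, int extra_flag, const char *what)
{
    int fd = open(path, O_RDONLY | extra_flag);
    if (fd < 0)
        printf("%s: %s (errno %d)\n", what, strerror(errno), errno);
    else
        close(fd);
}

int main(void)
{
    /* Placeholder fixtures mirroring posix.sh: a regular file and a symlink to it. */
    close(open("dd.dump0", O_CREAT | O_WRONLY, 0644));
    if (symlink("dd.dump0", "dd.dump0.link") != 0 && errno != EEXIST)
        perror("symlink");

    try_open("dd.dump0",      O_DIRECTORY, "O_DIRECTORY on regular file");
    try_open("dd.dump0.link", O_NOFOLLOW,  "O_NOFOLLOW on symlink");
    return 0;
}

In both subtests the NOT wrapper then maps spdk_dd's non-zero exit status back to the expected es=1, so the passes above succeed precisely because the copies fail.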
00:08:28.990 [2024-12-16 01:29:59.453147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75392 ] 00:08:28.990 [2024-12-16 01:29:59.601010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.990 [2024-12-16 01:29:59.620005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.249 [2024-12-16 01:29:59.648823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.249  [2024-12-16T01:29:59.907Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.249 00:08:29.249 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uzdpw1xf54os5w3johzhv3fkhmdbf24os6d7afeb03ri06wxjrhrqw8gtzkkmyg805atxaqxf1pfc84v0pe5lrw1d01dopbooq9byvf6prmp6k9dqukrt3xsu1pwgvgze28uzli2ztixzarm81d7w995vnjfjr0pwgtb5mzcac2scwt8do2etfonhd7jrgw3192917gxgpj1fmci8yp6xwrb68m3t3gu12z6btq1pj47snp2g08my0h4n266qh6je0q9ax7nrsylxmu1i21jz5ftk29sipnuhgitkf644bdn5kc6dtmv39ri6i9vwi2bvju8paa0po1286otsj4gztt2didh5oh9h97rc07sucoyfy4iu4kxotonrw0r1u1taq6wfcavnkyoyscler3h7e4ctiq6rwzaqlz729hvwvvckl0rw9xiuxt1c1csg3aqpgg1liteph53nx4vi0ebqc54duwsxugc969ir8guc7c3deli1nnja0wtovaov7ea == \u\z\d\p\w\1\x\f\5\4\o\s\5\w\3\j\o\h\z\h\v\3\f\k\h\m\d\b\f\2\4\o\s\6\d\7\a\f\e\b\0\3\r\i\0\6\w\x\j\r\h\r\q\w\8\g\t\z\k\k\m\y\g\8\0\5\a\t\x\a\q\x\f\1\p\f\c\8\4\v\0\p\e\5\l\r\w\1\d\0\1\d\o\p\b\o\o\q\9\b\y\v\f\6\p\r\m\p\6\k\9\d\q\u\k\r\t\3\x\s\u\1\p\w\g\v\g\z\e\2\8\u\z\l\i\2\z\t\i\x\z\a\r\m\8\1\d\7\w\9\9\5\v\n\j\f\j\r\0\p\w\g\t\b\5\m\z\c\a\c\2\s\c\w\t\8\d\o\2\e\t\f\o\n\h\d\7\j\r\g\w\3\1\9\2\9\1\7\g\x\g\p\j\1\f\m\c\i\8\y\p\6\x\w\r\b\6\8\m\3\t\3\g\u\1\2\z\6\b\t\q\1\p\j\4\7\s\n\p\2\g\0\8\m\y\0\h\4\n\2\6\6\q\h\6\j\e\0\q\9\a\x\7\n\r\s\y\l\x\m\u\1\i\2\1\j\z\5\f\t\k\2\9\s\i\p\n\u\h\g\i\t\k\f\6\4\4\b\d\n\5\k\c\6\d\t\m\v\3\9\r\i\6\i\9\v\w\i\2\b\v\j\u\8\p\a\a\0\p\o\1\2\8\6\o\t\s\j\4\g\z\t\t\2\d\i\d\h\5\o\h\9\h\9\7\r\c\0\7\s\u\c\o\y\f\y\4\i\u\4\k\x\o\t\o\n\r\w\0\r\1\u\1\t\a\q\6\w\f\c\a\v\n\k\y\o\y\s\c\l\e\r\3\h\7\e\4\c\t\i\q\6\r\w\z\a\q\l\z\7\2\9\h\v\w\v\v\c\k\l\0\r\w\9\x\i\u\x\t\1\c\1\c\s\g\3\a\q\p\g\g\1\l\i\t\e\p\h\5\3\n\x\4\v\i\0\e\b\q\c\5\4\d\u\w\s\x\u\g\c\9\6\9\i\r\8\g\u\c\7\c\3\d\e\l\i\1\n\n\j\a\0\w\t\o\v\a\o\v\7\e\a ]] 00:08:29.249 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.249 01:29:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:29.249 [2024-12-16 01:29:59.854604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:29.249 [2024-12-16 01:29:59.854694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75398 ] 00:08:29.508 [2024-12-16 01:30:00.002808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.508 [2024-12-16 01:30:00.024446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.508 [2024-12-16 01:30:00.052872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.508  [2024-12-16T01:30:00.425Z] Copying: 512/512 [B] (average 500 kBps) 00:08:29.767 00:08:29.767 01:30:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uzdpw1xf54os5w3johzhv3fkhmdbf24os6d7afeb03ri06wxjrhrqw8gtzkkmyg805atxaqxf1pfc84v0pe5lrw1d01dopbooq9byvf6prmp6k9dqukrt3xsu1pwgvgze28uzli2ztixzarm81d7w995vnjfjr0pwgtb5mzcac2scwt8do2etfonhd7jrgw3192917gxgpj1fmci8yp6xwrb68m3t3gu12z6btq1pj47snp2g08my0h4n266qh6je0q9ax7nrsylxmu1i21jz5ftk29sipnuhgitkf644bdn5kc6dtmv39ri6i9vwi2bvju8paa0po1286otsj4gztt2didh5oh9h97rc07sucoyfy4iu4kxotonrw0r1u1taq6wfcavnkyoyscler3h7e4ctiq6rwzaqlz729hvwvvckl0rw9xiuxt1c1csg3aqpgg1liteph53nx4vi0ebqc54duwsxugc969ir8guc7c3deli1nnja0wtovaov7ea == \u\z\d\p\w\1\x\f\5\4\o\s\5\w\3\j\o\h\z\h\v\3\f\k\h\m\d\b\f\2\4\o\s\6\d\7\a\f\e\b\0\3\r\i\0\6\w\x\j\r\h\r\q\w\8\g\t\z\k\k\m\y\g\8\0\5\a\t\x\a\q\x\f\1\p\f\c\8\4\v\0\p\e\5\l\r\w\1\d\0\1\d\o\p\b\o\o\q\9\b\y\v\f\6\p\r\m\p\6\k\9\d\q\u\k\r\t\3\x\s\u\1\p\w\g\v\g\z\e\2\8\u\z\l\i\2\z\t\i\x\z\a\r\m\8\1\d\7\w\9\9\5\v\n\j\f\j\r\0\p\w\g\t\b\5\m\z\c\a\c\2\s\c\w\t\8\d\o\2\e\t\f\o\n\h\d\7\j\r\g\w\3\1\9\2\9\1\7\g\x\g\p\j\1\f\m\c\i\8\y\p\6\x\w\r\b\6\8\m\3\t\3\g\u\1\2\z\6\b\t\q\1\p\j\4\7\s\n\p\2\g\0\8\m\y\0\h\4\n\2\6\6\q\h\6\j\e\0\q\9\a\x\7\n\r\s\y\l\x\m\u\1\i\2\1\j\z\5\f\t\k\2\9\s\i\p\n\u\h\g\i\t\k\f\6\4\4\b\d\n\5\k\c\6\d\t\m\v\3\9\r\i\6\i\9\v\w\i\2\b\v\j\u\8\p\a\a\0\p\o\1\2\8\6\o\t\s\j\4\g\z\t\t\2\d\i\d\h\5\o\h\9\h\9\7\r\c\0\7\s\u\c\o\y\f\y\4\i\u\4\k\x\o\t\o\n\r\w\0\r\1\u\1\t\a\q\6\w\f\c\a\v\n\k\y\o\y\s\c\l\e\r\3\h\7\e\4\c\t\i\q\6\r\w\z\a\q\l\z\7\2\9\h\v\w\v\v\c\k\l\0\r\w\9\x\i\u\x\t\1\c\1\c\s\g\3\a\q\p\g\g\1\l\i\t\e\p\h\5\3\n\x\4\v\i\0\e\b\q\c\5\4\d\u\w\s\x\u\g\c\9\6\9\i\r\8\g\u\c\7\c\3\d\e\l\i\1\n\n\j\a\0\w\t\o\v\a\o\v\7\e\a ]] 00:08:29.767 01:30:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:29.767 01:30:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:29.767 [2024-12-16 01:30:00.258959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:29.767 [2024-12-16 01:30:00.259255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75407 ] 00:08:29.767 [2024-12-16 01:30:00.409508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.026 [2024-12-16 01:30:00.429906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.026 [2024-12-16 01:30:00.458639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.026  [2024-12-16T01:30:00.684Z] Copying: 512/512 [B] (average 500 kBps) 00:08:30.026 00:08:30.026 01:30:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uzdpw1xf54os5w3johzhv3fkhmdbf24os6d7afeb03ri06wxjrhrqw8gtzkkmyg805atxaqxf1pfc84v0pe5lrw1d01dopbooq9byvf6prmp6k9dqukrt3xsu1pwgvgze28uzli2ztixzarm81d7w995vnjfjr0pwgtb5mzcac2scwt8do2etfonhd7jrgw3192917gxgpj1fmci8yp6xwrb68m3t3gu12z6btq1pj47snp2g08my0h4n266qh6je0q9ax7nrsylxmu1i21jz5ftk29sipnuhgitkf644bdn5kc6dtmv39ri6i9vwi2bvju8paa0po1286otsj4gztt2didh5oh9h97rc07sucoyfy4iu4kxotonrw0r1u1taq6wfcavnkyoyscler3h7e4ctiq6rwzaqlz729hvwvvckl0rw9xiuxt1c1csg3aqpgg1liteph53nx4vi0ebqc54duwsxugc969ir8guc7c3deli1nnja0wtovaov7ea == \u\z\d\p\w\1\x\f\5\4\o\s\5\w\3\j\o\h\z\h\v\3\f\k\h\m\d\b\f\2\4\o\s\6\d\7\a\f\e\b\0\3\r\i\0\6\w\x\j\r\h\r\q\w\8\g\t\z\k\k\m\y\g\8\0\5\a\t\x\a\q\x\f\1\p\f\c\8\4\v\0\p\e\5\l\r\w\1\d\0\1\d\o\p\b\o\o\q\9\b\y\v\f\6\p\r\m\p\6\k\9\d\q\u\k\r\t\3\x\s\u\1\p\w\g\v\g\z\e\2\8\u\z\l\i\2\z\t\i\x\z\a\r\m\8\1\d\7\w\9\9\5\v\n\j\f\j\r\0\p\w\g\t\b\5\m\z\c\a\c\2\s\c\w\t\8\d\o\2\e\t\f\o\n\h\d\7\j\r\g\w\3\1\9\2\9\1\7\g\x\g\p\j\1\f\m\c\i\8\y\p\6\x\w\r\b\6\8\m\3\t\3\g\u\1\2\z\6\b\t\q\1\p\j\4\7\s\n\p\2\g\0\8\m\y\0\h\4\n\2\6\6\q\h\6\j\e\0\q\9\a\x\7\n\r\s\y\l\x\m\u\1\i\2\1\j\z\5\f\t\k\2\9\s\i\p\n\u\h\g\i\t\k\f\6\4\4\b\d\n\5\k\c\6\d\t\m\v\3\9\r\i\6\i\9\v\w\i\2\b\v\j\u\8\p\a\a\0\p\o\1\2\8\6\o\t\s\j\4\g\z\t\t\2\d\i\d\h\5\o\h\9\h\9\7\r\c\0\7\s\u\c\o\y\f\y\4\i\u\4\k\x\o\t\o\n\r\w\0\r\1\u\1\t\a\q\6\w\f\c\a\v\n\k\y\o\y\s\c\l\e\r\3\h\7\e\4\c\t\i\q\6\r\w\z\a\q\l\z\7\2\9\h\v\w\v\v\c\k\l\0\r\w\9\x\i\u\x\t\1\c\1\c\s\g\3\a\q\p\g\g\1\l\i\t\e\p\h\5\3\n\x\4\v\i\0\e\b\q\c\5\4\d\u\w\s\x\u\g\c\9\6\9\i\r\8\g\u\c\7\c\3\d\e\l\i\1\n\n\j\a\0\w\t\o\v\a\o\v\7\e\a ]] 00:08:30.026 01:30:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.026 01:30:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:30.026 [2024-12-16 01:30:00.664079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
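The dd_flags_misc_forced_aio loop running here pairs each read flag (direct, nonblock) with each write flag (direct, nonblock, sync, dsync); the dsync pass for the nonblock input is just starting above. The names match the usual open(2) flags, though that correspondence is an assumption since the log only records the spdk_dd command lines. A hedged sketch of what each write flag requests, using a placeholder output file and a 512-byte aligned buffer because O_DIRECT typically requires sector-aligned I/O:

/* What the four oflag names ask of open(2)/write(2). Sketch only. */
#define _GNU_SOURCE                      /* O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    void *buf = NULL;
    if (posix_memalign(&buf, 512, 512) != 0) return 1;  /* alignment for O_DIRECT */
    memset(buf, 0, 512);

    const struct { const char *name; int flag; } flags[] = {
        { "direct",   O_DIRECT   },  /* bypass the page cache */
        { "nonblock", O_NONBLOCK },  /* do not block on open/write */
        { "sync",     O_SYNC     },  /* data and metadata durable before return */
        { "dsync",    O_DSYNC    },  /* data durable, metadata may be deferred */
    };

    for (size_t i = 0; i < sizeof flags / sizeof flags[0]; i++) {
        int fd = open("dd.dump1", O_WRONLY | O_CREAT | flags[i].flag, 0644);
        if (fd < 0) { perror(flags[i].name); continue; }
        if (write(fd, buf, 512) != 512) perror(flags[i].name);
        close(fd);
    }
    free(buf);
    return 0;
}

The [[ ... == ... ]] comparisons in the log then check that the copied contents survive each flag combination unchanged.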
00:08:30.026 [2024-12-16 01:30:00.664180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:08:30.284 [2024-12-16 01:30:00.810427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.284 [2024-12-16 01:30:00.830862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.284 [2024-12-16 01:30:00.861994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.284  [2024-12-16T01:30:01.201Z] Copying: 512/512 [B] (average 500 kBps) 00:08:30.543 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uzdpw1xf54os5w3johzhv3fkhmdbf24os6d7afeb03ri06wxjrhrqw8gtzkkmyg805atxaqxf1pfc84v0pe5lrw1d01dopbooq9byvf6prmp6k9dqukrt3xsu1pwgvgze28uzli2ztixzarm81d7w995vnjfjr0pwgtb5mzcac2scwt8do2etfonhd7jrgw3192917gxgpj1fmci8yp6xwrb68m3t3gu12z6btq1pj47snp2g08my0h4n266qh6je0q9ax7nrsylxmu1i21jz5ftk29sipnuhgitkf644bdn5kc6dtmv39ri6i9vwi2bvju8paa0po1286otsj4gztt2didh5oh9h97rc07sucoyfy4iu4kxotonrw0r1u1taq6wfcavnkyoyscler3h7e4ctiq6rwzaqlz729hvwvvckl0rw9xiuxt1c1csg3aqpgg1liteph53nx4vi0ebqc54duwsxugc969ir8guc7c3deli1nnja0wtovaov7ea == \u\z\d\p\w\1\x\f\5\4\o\s\5\w\3\j\o\h\z\h\v\3\f\k\h\m\d\b\f\2\4\o\s\6\d\7\a\f\e\b\0\3\r\i\0\6\w\x\j\r\h\r\q\w\8\g\t\z\k\k\m\y\g\8\0\5\a\t\x\a\q\x\f\1\p\f\c\8\4\v\0\p\e\5\l\r\w\1\d\0\1\d\o\p\b\o\o\q\9\b\y\v\f\6\p\r\m\p\6\k\9\d\q\u\k\r\t\3\x\s\u\1\p\w\g\v\g\z\e\2\8\u\z\l\i\2\z\t\i\x\z\a\r\m\8\1\d\7\w\9\9\5\v\n\j\f\j\r\0\p\w\g\t\b\5\m\z\c\a\c\2\s\c\w\t\8\d\o\2\e\t\f\o\n\h\d\7\j\r\g\w\3\1\9\2\9\1\7\g\x\g\p\j\1\f\m\c\i\8\y\p\6\x\w\r\b\6\8\m\3\t\3\g\u\1\2\z\6\b\t\q\1\p\j\4\7\s\n\p\2\g\0\8\m\y\0\h\4\n\2\6\6\q\h\6\j\e\0\q\9\a\x\7\n\r\s\y\l\x\m\u\1\i\2\1\j\z\5\f\t\k\2\9\s\i\p\n\u\h\g\i\t\k\f\6\4\4\b\d\n\5\k\c\6\d\t\m\v\3\9\r\i\6\i\9\v\w\i\2\b\v\j\u\8\p\a\a\0\p\o\1\2\8\6\o\t\s\j\4\g\z\t\t\2\d\i\d\h\5\o\h\9\h\9\7\r\c\0\7\s\u\c\o\y\f\y\4\i\u\4\k\x\o\t\o\n\r\w\0\r\1\u\1\t\a\q\6\w\f\c\a\v\n\k\y\o\y\s\c\l\e\r\3\h\7\e\4\c\t\i\q\6\r\w\z\a\q\l\z\7\2\9\h\v\w\v\v\c\k\l\0\r\w\9\x\i\u\x\t\1\c\1\c\s\g\3\a\q\p\g\g\1\l\i\t\e\p\h\5\3\n\x\4\v\i\0\e\b\q\c\5\4\d\u\w\s\x\u\g\c\9\6\9\i\r\8\g\u\c\7\c\3\d\e\l\i\1\n\n\j\a\0\w\t\o\v\a\o\v\7\e\a ]] 00:08:30.543 00:08:30.543 real 0m3.226s 00:08:30.543 user 0m1.520s 00:08:30.543 sys 0m0.725s 00:08:30.543 ************************************ 00:08:30.543 END TEST dd_flags_misc_forced_aio 00:08:30.543 ************************************ 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:30.543 ************************************ 00:08:30.543 END TEST spdk_dd_posix 00:08:30.543 ************************************ 00:08:30.543 00:08:30.543 real 0m15.297s 00:08:30.543 user 0m6.323s 00:08:30.543 sys 0m4.277s 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.543 01:30:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:30.544 01:30:01 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:30.544 01:30:01 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.544 01:30:01 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.544 01:30:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:30.544 ************************************ 00:08:30.544 START TEST spdk_dd_malloc 00:08:30.544 ************************************ 00:08:30.544 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:30.544 * Looking for test storage... 00:08:30.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:30.544 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:30.544 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:30.544 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.803 --rc genhtml_branch_coverage=1 00:08:30.803 --rc genhtml_function_coverage=1 00:08:30.803 --rc genhtml_legend=1 00:08:30.803 --rc geninfo_all_blocks=1 00:08:30.803 --rc geninfo_unexecuted_blocks=1 00:08:30.803 00:08:30.803 ' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.803 --rc genhtml_branch_coverage=1 00:08:30.803 --rc genhtml_function_coverage=1 00:08:30.803 --rc genhtml_legend=1 00:08:30.803 --rc geninfo_all_blocks=1 00:08:30.803 --rc geninfo_unexecuted_blocks=1 00:08:30.803 00:08:30.803 ' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.803 --rc genhtml_branch_coverage=1 00:08:30.803 --rc genhtml_function_coverage=1 00:08:30.803 --rc genhtml_legend=1 00:08:30.803 --rc geninfo_all_blocks=1 00:08:30.803 --rc geninfo_unexecuted_blocks=1 00:08:30.803 00:08:30.803 ' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.803 --rc genhtml_branch_coverage=1 00:08:30.803 --rc genhtml_function_coverage=1 00:08:30.803 --rc genhtml_legend=1 00:08:30.803 --rc geninfo_all_blocks=1 00:08:30.803 --rc geninfo_unexecuted_blocks=1 00:08:30.803 00:08:30.803 ' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.803 01:30:01 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.803 01:30:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:30.803 ************************************ 00:08:30.803 START TEST dd_malloc_copy 00:08:30.804 ************************************ 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
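The dd_malloc_copy setup that follows declares two malloc bdevs of 1048576 blocks at 512 bytes per block, i.e. 512 MiB each, and the runs below copy one into the other through spdk_dd --ib=malloc0 --ob=malloc1, reporting progress and an average rate (the Copying: 512/512 [MB] lines). Purely as a point of reference, and not what spdk_dd does internally, a user-space analogue of moving that much data between two RAM buffers is:

/* Reference only: copy 1048576 x 512-byte blocks (512 MiB) between two
 * heap buffers and report the rate, mirroring the sizes configured below. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    const size_t block_size = 512, num_blocks = 1048576;
    const size_t total = block_size * num_blocks;        /* 512 MiB */

    char *src = malloc(total), *dst = malloc(total);
    if (!src || !dst) return 1;
    memset(src, 0xab, total);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, total);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("copied %zu MiB in %.3f s (%.0f MiB/s)\n",
           total >> 20, secs, (double)(total >> 20) / secs);
    free(src);
    free(dst);
    return 0;
}

The spdk_dd figures recorded below (roughly 210 to 230 MBps per pass) sit well under a bare memcpy, presumably because the copy is driven through the bdev layer in fixed-size I/Os rather than as one flat memory move.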
00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:30.804 01:30:01 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.804 [2024-12-16 01:30:01.376567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:30.804 [2024-12-16 01:30:01.376938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75491 ] 00:08:30.804 { 00:08:30.804 "subsystems": [ 00:08:30.804 { 00:08:30.804 "subsystem": "bdev", 00:08:30.804 "config": [ 00:08:30.804 { 00:08:30.804 "params": { 00:08:30.804 "block_size": 512, 00:08:30.804 "num_blocks": 1048576, 00:08:30.804 "name": "malloc0" 00:08:30.804 }, 00:08:30.804 "method": "bdev_malloc_create" 00:08:30.804 }, 00:08:30.804 { 00:08:30.804 "params": { 00:08:30.804 "block_size": 512, 00:08:30.804 "num_blocks": 1048576, 00:08:30.804 "name": "malloc1" 00:08:30.804 }, 00:08:30.804 "method": "bdev_malloc_create" 00:08:30.804 }, 00:08:30.804 { 00:08:30.804 "method": "bdev_wait_for_examine" 00:08:30.804 } 00:08:30.804 ] 00:08:30.804 } 00:08:30.804 ] 00:08:30.804 } 00:08:31.063 [2024-12-16 01:30:01.526626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.063 [2024-12-16 01:30:01.546754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.063 [2024-12-16 01:30:01.575880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.438  [2024-12-16T01:30:04.030Z] Copying: 210/512 [MB] (210 MBps) [2024-12-16T01:30:04.289Z] Copying: 432/512 [MB] (222 MBps) [2024-12-16T01:30:04.548Z] Copying: 512/512 [MB] (average 212 MBps) 00:08:33.890 00:08:33.890 01:30:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:33.890 01:30:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:33.890 01:30:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:33.890 01:30:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:34.149 [2024-12-16 01:30:04.595336] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:34.149 [2024-12-16 01:30:04.596273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75533 ] 00:08:34.149 { 00:08:34.149 "subsystems": [ 00:08:34.149 { 00:08:34.149 "subsystem": "bdev", 00:08:34.149 "config": [ 00:08:34.149 { 00:08:34.149 "params": { 00:08:34.149 "block_size": 512, 00:08:34.149 "num_blocks": 1048576, 00:08:34.149 "name": "malloc0" 00:08:34.149 }, 00:08:34.149 "method": "bdev_malloc_create" 00:08:34.150 }, 00:08:34.150 { 00:08:34.150 "params": { 00:08:34.150 "block_size": 512, 00:08:34.150 "num_blocks": 1048576, 00:08:34.150 "name": "malloc1" 00:08:34.150 }, 00:08:34.150 "method": "bdev_malloc_create" 00:08:34.150 }, 00:08:34.150 { 00:08:34.150 "method": "bdev_wait_for_examine" 00:08:34.150 } 00:08:34.150 ] 00:08:34.150 } 00:08:34.150 ] 00:08:34.150 } 00:08:34.150 [2024-12-16 01:30:04.747303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.150 [2024-12-16 01:30:04.767917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.150 [2024-12-16 01:30:04.796924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.529  [2024-12-16T01:30:07.124Z] Copying: 226/512 [MB] (226 MBps) [2024-12-16T01:30:07.382Z] Copying: 456/512 [MB] (230 MBps) [2024-12-16T01:30:07.949Z] Copying: 512/512 [MB] (average 221 MBps) 00:08:37.291 00:08:37.291 ************************************ 00:08:37.291 END TEST dd_malloc_copy 00:08:37.291 ************************************ 00:08:37.291 00:08:37.291 real 0m6.339s 00:08:37.291 user 0m5.707s 00:08:37.291 sys 0m0.486s 00:08:37.291 01:30:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.291 01:30:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:37.291 ************************************ 00:08:37.291 END TEST spdk_dd_malloc 00:08:37.291 ************************************ 00:08:37.291 00:08:37.291 real 0m6.592s 00:08:37.291 user 0m5.847s 00:08:37.291 sys 0m0.597s 00:08:37.291 01:30:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.291 01:30:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:37.291 01:30:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:37.291 01:30:07 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:37.291 01:30:07 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.291 01:30:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:37.291 ************************************ 00:08:37.291 START TEST spdk_dd_bdev_to_bdev 00:08:37.291 ************************************ 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:37.291 * Looking for test storage... 
00:08:37.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lcov --version 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.291 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.292 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:37.292 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:37.292 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.292 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:37.292 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.551 --rc genhtml_branch_coverage=1 00:08:37.551 --rc genhtml_function_coverage=1 00:08:37.551 --rc genhtml_legend=1 00:08:37.551 --rc geninfo_all_blocks=1 00:08:37.551 --rc geninfo_unexecuted_blocks=1 00:08:37.551 00:08:37.551 ' 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.551 --rc genhtml_branch_coverage=1 00:08:37.551 --rc genhtml_function_coverage=1 00:08:37.551 --rc genhtml_legend=1 00:08:37.551 --rc geninfo_all_blocks=1 00:08:37.551 --rc geninfo_unexecuted_blocks=1 00:08:37.551 00:08:37.551 ' 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.551 --rc genhtml_branch_coverage=1 00:08:37.551 --rc genhtml_function_coverage=1 00:08:37.551 --rc genhtml_legend=1 00:08:37.551 --rc geninfo_all_blocks=1 00:08:37.551 --rc geninfo_unexecuted_blocks=1 00:08:37.551 00:08:37.551 ' 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:37.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.551 --rc genhtml_branch_coverage=1 00:08:37.551 --rc genhtml_function_coverage=1 00:08:37.551 --rc genhtml_legend=1 00:08:37.551 --rc geninfo_all_blocks=1 00:08:37.551 --rc geninfo_unexecuted_blocks=1 00:08:37.551 00:08:37.551 ' 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.551 01:30:07 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:37.551 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.552 ************************************ 00:08:37.552 START TEST dd_inflate_file 00:08:37.552 ************************************ 00:08:37.552 01:30:07 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:37.552 [2024-12-16 01:30:08.031253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:37.552 [2024-12-16 01:30:08.031505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75640 ] 00:08:37.552 [2024-12-16 01:30:08.180669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.552 [2024-12-16 01:30:08.201233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.810 [2024-12-16 01:30:08.230491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.810  [2024-12-16T01:30:08.468Z] Copying: 64/64 [MB] (average 1600 MBps) 00:08:37.810 00:08:37.810 00:08:37.810 real 0m0.423s 00:08:37.810 user 0m0.228s 00:08:37.810 sys 0m0.215s 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:37.810 ************************************ 00:08:37.810 END TEST dd_inflate_file 00:08:37.810 ************************************ 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.810 ************************************ 00:08:37.810 START TEST dd_copy_to_out_bdev 00:08:37.810 ************************************ 00:08:37.810 01:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:38.069 { 00:08:38.069 "subsystems": [ 00:08:38.069 { 00:08:38.069 "subsystem": "bdev", 00:08:38.069 "config": [ 00:08:38.069 { 00:08:38.069 "params": { 00:08:38.069 "trtype": "pcie", 00:08:38.069 "traddr": "0000:00:10.0", 00:08:38.069 "name": "Nvme0" 00:08:38.069 }, 00:08:38.069 "method": "bdev_nvme_attach_controller" 00:08:38.069 }, 00:08:38.069 { 00:08:38.069 "params": { 00:08:38.069 "trtype": "pcie", 00:08:38.069 "traddr": "0000:00:11.0", 00:08:38.069 "name": "Nvme1" 00:08:38.069 }, 00:08:38.069 "method": "bdev_nvme_attach_controller" 00:08:38.069 }, 00:08:38.069 { 00:08:38.069 "method": "bdev_wait_for_examine" 00:08:38.069 } 00:08:38.069 ] 00:08:38.069 } 00:08:38.069 ] 00:08:38.069 } 00:08:38.069 [2024-12-16 01:30:08.512087] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:38.069 [2024-12-16 01:30:08.512179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75679 ] 00:08:38.069 [2024-12-16 01:30:08.659616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.069 [2024-12-16 01:30:08.678850] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.069 [2024-12-16 01:30:08.707314] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.445  [2024-12-16T01:30:10.361Z] Copying: 50/64 [MB] (50 MBps) [2024-12-16T01:30:10.362Z] Copying: 64/64 [MB] (average 51 MBps) 00:08:39.704 00:08:39.704 00:08:39.704 real 0m1.829s 00:08:39.704 user 0m1.636s 00:08:39.704 sys 0m1.511s 00:08:39.704 ************************************ 00:08:39.704 END TEST dd_copy_to_out_bdev 00:08:39.704 ************************************ 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 ************************************ 00:08:39.704 START TEST dd_offset_magic 00:08:39.704 ************************************ 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:39.704 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:39.963 [2024-12-16 01:30:10.394701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:39.963 [2024-12-16 01:30:10.394794] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75724 ] 00:08:39.963 { 00:08:39.963 "subsystems": [ 00:08:39.963 { 00:08:39.963 "subsystem": "bdev", 00:08:39.963 "config": [ 00:08:39.963 { 00:08:39.963 "params": { 00:08:39.963 "trtype": "pcie", 00:08:39.963 "traddr": "0000:00:10.0", 00:08:39.963 "name": "Nvme0" 00:08:39.963 }, 00:08:39.963 "method": "bdev_nvme_attach_controller" 00:08:39.963 }, 00:08:39.963 { 00:08:39.963 "params": { 00:08:39.963 "trtype": "pcie", 00:08:39.963 "traddr": "0000:00:11.0", 00:08:39.963 "name": "Nvme1" 00:08:39.963 }, 00:08:39.963 "method": "bdev_nvme_attach_controller" 00:08:39.963 }, 00:08:39.963 { 00:08:39.963 "method": "bdev_wait_for_examine" 00:08:39.963 } 00:08:39.963 ] 00:08:39.963 } 00:08:39.963 ] 00:08:39.963 } 00:08:39.963 [2024-12-16 01:30:10.547328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.963 [2024-12-16 01:30:10.571702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.963 [2024-12-16 01:30:10.606700] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.222  [2024-12-16T01:30:11.140Z] Copying: 65/65 [MB] (average 1000 MBps) 00:08:40.482 00:08:40.482 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:40.482 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:40.482 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:40.482 01:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:40.482 [2024-12-16 01:30:11.051672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:40.482 [2024-12-16 01:30:11.052191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75733 ] 00:08:40.482 { 00:08:40.482 "subsystems": [ 00:08:40.482 { 00:08:40.482 "subsystem": "bdev", 00:08:40.482 "config": [ 00:08:40.482 { 00:08:40.482 "params": { 00:08:40.482 "trtype": "pcie", 00:08:40.482 "traddr": "0000:00:10.0", 00:08:40.482 "name": "Nvme0" 00:08:40.482 }, 00:08:40.482 "method": "bdev_nvme_attach_controller" 00:08:40.482 }, 00:08:40.482 { 00:08:40.482 "params": { 00:08:40.482 "trtype": "pcie", 00:08:40.482 "traddr": "0000:00:11.0", 00:08:40.482 "name": "Nvme1" 00:08:40.482 }, 00:08:40.482 "method": "bdev_nvme_attach_controller" 00:08:40.482 }, 00:08:40.482 { 00:08:40.482 "method": "bdev_wait_for_examine" 00:08:40.482 } 00:08:40.482 ] 00:08:40.482 } 00:08:40.482 ] 00:08:40.482 } 00:08:40.741 [2024-12-16 01:30:11.202298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.741 [2024-12-16 01:30:11.221453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.741 [2024-12-16 01:30:11.249601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.001  [2024-12-16T01:30:11.659Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:41.001 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:41.001 01:30:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:41.001 { 00:08:41.001 "subsystems": [ 00:08:41.001 { 00:08:41.001 "subsystem": "bdev", 00:08:41.001 "config": [ 00:08:41.001 { 00:08:41.001 "params": { 00:08:41.001 "trtype": "pcie", 00:08:41.001 "traddr": "0000:00:10.0", 00:08:41.001 "name": "Nvme0" 00:08:41.001 }, 00:08:41.001 "method": "bdev_nvme_attach_controller" 00:08:41.001 }, 00:08:41.001 { 00:08:41.001 "params": { 00:08:41.001 "trtype": "pcie", 00:08:41.001 "traddr": "0000:00:11.0", 00:08:41.001 "name": "Nvme1" 00:08:41.001 }, 00:08:41.001 "method": "bdev_nvme_attach_controller" 00:08:41.001 }, 00:08:41.001 { 00:08:41.001 "method": "bdev_wait_for_examine" 00:08:41.001 } 00:08:41.001 ] 00:08:41.001 } 00:08:41.001 ] 00:08:41.001 } 00:08:41.001 [2024-12-16 01:30:11.589966] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:41.001 [2024-12-16 01:30:11.590057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75755 ] 00:08:41.260 [2024-12-16 01:30:11.737083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.260 [2024-12-16 01:30:11.756100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.260 [2024-12-16 01:30:11.785884] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.519  [2024-12-16T01:30:12.177Z] Copying: 65/65 [MB] (average 1031 MBps) 00:08:41.519 00:08:41.519 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:41.519 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:41.519 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:41.520 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:41.779 [2024-12-16 01:30:12.222852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:41.779 [2024-12-16 01:30:12.222967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75770 ] 00:08:41.779 { 00:08:41.779 "subsystems": [ 00:08:41.779 { 00:08:41.779 "subsystem": "bdev", 00:08:41.779 "config": [ 00:08:41.779 { 00:08:41.779 "params": { 00:08:41.779 "trtype": "pcie", 00:08:41.779 "traddr": "0000:00:10.0", 00:08:41.779 "name": "Nvme0" 00:08:41.779 }, 00:08:41.779 "method": "bdev_nvme_attach_controller" 00:08:41.779 }, 00:08:41.779 { 00:08:41.779 "params": { 00:08:41.779 "trtype": "pcie", 00:08:41.779 "traddr": "0000:00:11.0", 00:08:41.779 "name": "Nvme1" 00:08:41.779 }, 00:08:41.779 "method": "bdev_nvme_attach_controller" 00:08:41.779 }, 00:08:41.779 { 00:08:41.779 "method": "bdev_wait_for_examine" 00:08:41.779 } 00:08:41.779 ] 00:08:41.779 } 00:08:41.779 ] 00:08:41.779 } 00:08:41.779 [2024-12-16 01:30:12.372741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.779 [2024-12-16 01:30:12.393577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.779 [2024-12-16 01:30:12.422213] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.052  [2024-12-16T01:30:12.710Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:42.052 00:08:42.052 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:42.052 ************************************ 00:08:42.052 END TEST dd_offset_magic 00:08:42.052 ************************************ 00:08:42.052 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:42.052 00:08:42.052 real 0m2.359s 00:08:42.052 user 0m1.666s 00:08:42.052 sys 0m0.674s 00:08:42.052 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:08:42.052 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:42.324 01:30:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:42.324 [2024-12-16 01:30:12.786639] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:42.324 [2024-12-16 01:30:12.786703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75801 ] 00:08:42.324 { 00:08:42.324 "subsystems": [ 00:08:42.324 { 00:08:42.324 "subsystem": "bdev", 00:08:42.324 "config": [ 00:08:42.324 { 00:08:42.324 "params": { 00:08:42.324 "trtype": "pcie", 00:08:42.324 "traddr": "0000:00:10.0", 00:08:42.324 "name": "Nvme0" 00:08:42.324 }, 00:08:42.324 "method": "bdev_nvme_attach_controller" 00:08:42.324 }, 00:08:42.324 { 00:08:42.324 "params": { 00:08:42.324 "trtype": "pcie", 00:08:42.324 "traddr": "0000:00:11.0", 00:08:42.324 "name": "Nvme1" 00:08:42.324 }, 00:08:42.324 "method": "bdev_nvme_attach_controller" 00:08:42.324 }, 00:08:42.324 { 00:08:42.324 "method": "bdev_wait_for_examine" 00:08:42.324 } 00:08:42.324 ] 00:08:42.324 } 00:08:42.324 ] 00:08:42.324 } 00:08:42.324 [2024-12-16 01:30:12.924247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.324 [2024-12-16 01:30:12.943165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.324 [2024-12-16 01:30:12.970657] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.584  [2024-12-16T01:30:13.501Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:42.843 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:42.843 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:42.843 [2024-12-16 01:30:13.334407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:42.843 [2024-12-16 01:30:13.334499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75817 ] 00:08:42.843 { 00:08:42.843 "subsystems": [ 00:08:42.843 { 00:08:42.843 "subsystem": "bdev", 00:08:42.843 "config": [ 00:08:42.843 { 00:08:42.843 "params": { 00:08:42.843 "trtype": "pcie", 00:08:42.843 "traddr": "0000:00:10.0", 00:08:42.843 "name": "Nvme0" 00:08:42.843 }, 00:08:42.843 "method": "bdev_nvme_attach_controller" 00:08:42.843 }, 00:08:42.843 { 00:08:42.843 "params": { 00:08:42.843 "trtype": "pcie", 00:08:42.843 "traddr": "0000:00:11.0", 00:08:42.843 "name": "Nvme1" 00:08:42.843 }, 00:08:42.843 "method": "bdev_nvme_attach_controller" 00:08:42.843 }, 00:08:42.843 { 00:08:42.843 "method": "bdev_wait_for_examine" 00:08:42.843 } 00:08:42.843 ] 00:08:42.843 } 00:08:42.843 ] 00:08:42.843 } 00:08:42.843 [2024-12-16 01:30:13.485826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.102 [2024-12-16 01:30:13.509235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.102 [2024-12-16 01:30:13.542144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.102  [2024-12-16T01:30:14.020Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:43.362 00:08:43.362 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:43.362 ************************************ 00:08:43.362 END TEST spdk_dd_bdev_to_bdev 00:08:43.362 ************************************ 00:08:43.362 00:08:43.362 real 0m6.098s 00:08:43.362 user 0m4.498s 00:08:43.362 sys 0m2.959s 00:08:43.362 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.362 01:30:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:43.362 01:30:13 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:43.362 01:30:13 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:43.362 01:30:13 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.362 01:30:13 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.362 01:30:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:43.362 ************************************ 00:08:43.362 START TEST spdk_dd_uring 00:08:43.362 ************************************ 00:08:43.362 01:30:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:43.362 * Looking for test storage... 
00:08:43.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:43.362 01:30:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:43.362 01:30:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lcov --version 00:08:43.362 01:30:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:43.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:43.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.621 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:43.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.621 --rc genhtml_branch_coverage=1 00:08:43.621 --rc genhtml_function_coverage=1 00:08:43.621 --rc genhtml_legend=1 00:08:43.621 --rc geninfo_all_blocks=1 00:08:43.621 --rc geninfo_unexecuted_blocks=1 00:08:43.621 00:08:43.621 ' 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:43.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.622 --rc genhtml_branch_coverage=1 00:08:43.622 --rc genhtml_function_coverage=1 00:08:43.622 --rc genhtml_legend=1 00:08:43.622 --rc geninfo_all_blocks=1 00:08:43.622 --rc geninfo_unexecuted_blocks=1 00:08:43.622 00:08:43.622 ' 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:43.622 ************************************ 00:08:43.622 START TEST dd_uring_copy 00:08:43.622 ************************************ 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:43.622 
01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=xw2zrfrehxyu1dm7q6oc49ph6eocjymc7d8qpa8wqkallchcruvsclo8jag2l7yznii6su0pbdcx4wa33ptp9d1f8emw2uzhihz5purjcbcs1uk3cjzsyhws7vejy9mkvr5vukgmxwwbwexg0b92zjkia0fnr63lswag9v68r8kescr9nmwzqi3lt2cq72491ioseshe24q9x2z2hr1ks0ajg6uv4t2bs67n87xk3v8leq5s3ks37ki1yoknnhkas5sm25fpf3zdtmorugsl75dz6ir9f29lcz9fkdpq8g44kkxp67x0ywb6idpmhwvq0djgz4defmegtj3evycaz7v9jgmsimarxri9pmcivkekyjg71og8606rgkxr5ort8fw9rzula7jya4qtfoupqcmxyj1g32ptn7v4ph1jr43htuva78zbyri4lyb71lo25gxkk4vl3wqamlsn03mx6xvbr5zpciqntdxpgjjb8ykeejz5q2o6m5foioya0rnl4w8zm0x5x45kqhqtfjw01ochv1madxhc4y8ryv72ysfbs6qu8byt0z9wo0wlgtcmak7n4tbycxsbrfhpflfslaziqe9jxxry0iio2g6vo0aq6j9phnrl4c5w1pkku4a8pjemam727fhc00op4fgxn4p2nmm98qr9daocs8sh8mkmsxknpiu3j5i4h91a710gsohed3tafh25fxues42gradghaldzn4wq2ykvcrbi1xsiwczoi7rkpsks6ns2d1smlupf99dni6bhkjezavn1ny6w1t2fcq44frxbahc4zap4glwf1vh3mzx4tuk6oidhfq7r6du0pycxzxcxnu2tc3d1ubhalozn1dwrmuydgcm17zzfmvvz7xa2pjxgkgpeu2yd7vwrn7rr77qodr9lq4bt3rcbwxgafw3cmbf2mp2dt8gahpvkgwzolt2u5jgwisa8ffepfgdagnflj8qzi9fqvwf4o8fur1oqfm0vvya3mkm0heg7h4ur5f05m8n 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
xw2zrfrehxyu1dm7q6oc49ph6eocjymc7d8qpa8wqkallchcruvsclo8jag2l7yznii6su0pbdcx4wa33ptp9d1f8emw2uzhihz5purjcbcs1uk3cjzsyhws7vejy9mkvr5vukgmxwwbwexg0b92zjkia0fnr63lswag9v68r8kescr9nmwzqi3lt2cq72491ioseshe24q9x2z2hr1ks0ajg6uv4t2bs67n87xk3v8leq5s3ks37ki1yoknnhkas5sm25fpf3zdtmorugsl75dz6ir9f29lcz9fkdpq8g44kkxp67x0ywb6idpmhwvq0djgz4defmegtj3evycaz7v9jgmsimarxri9pmcivkekyjg71og8606rgkxr5ort8fw9rzula7jya4qtfoupqcmxyj1g32ptn7v4ph1jr43htuva78zbyri4lyb71lo25gxkk4vl3wqamlsn03mx6xvbr5zpciqntdxpgjjb8ykeejz5q2o6m5foioya0rnl4w8zm0x5x45kqhqtfjw01ochv1madxhc4y8ryv72ysfbs6qu8byt0z9wo0wlgtcmak7n4tbycxsbrfhpflfslaziqe9jxxry0iio2g6vo0aq6j9phnrl4c5w1pkku4a8pjemam727fhc00op4fgxn4p2nmm98qr9daocs8sh8mkmsxknpiu3j5i4h91a710gsohed3tafh25fxues42gradghaldzn4wq2ykvcrbi1xsiwczoi7rkpsks6ns2d1smlupf99dni6bhkjezavn1ny6w1t2fcq44frxbahc4zap4glwf1vh3mzx4tuk6oidhfq7r6du0pycxzxcxnu2tc3d1ubhalozn1dwrmuydgcm17zzfmvvz7xa2pjxgkgpeu2yd7vwrn7rr77qodr9lq4bt3rcbwxgafw3cmbf2mp2dt8gahpvkgwzolt2u5jgwisa8ffepfgdagnflj8qzi9fqvwf4o8fur1oqfm0vvya3mkm0heg7h4ur5f05m8n 00:08:43.622 01:30:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:43.622 [2024-12-16 01:30:14.198498] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:43.622 [2024-12-16 01:30:14.198774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75889 ] 00:08:43.882 [2024-12-16 01:30:14.336712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.882 [2024-12-16 01:30:14.355750] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.882 [2024-12-16 01:30:14.383889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.450  [2024-12-16T01:30:15.108Z] Copying: 511/511 [MB] (average 1430 MBps) 00:08:44.450 00:08:44.450 01:30:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:44.450 01:30:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:44.450 01:30:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:44.450 01:30:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:44.709 { 00:08:44.709 "subsystems": [ 00:08:44.709 { 00:08:44.709 "subsystem": "bdev", 00:08:44.709 "config": [ 00:08:44.709 { 00:08:44.709 "params": { 00:08:44.709 "block_size": 512, 00:08:44.709 "num_blocks": 1048576, 00:08:44.709 "name": "malloc0" 00:08:44.709 }, 00:08:44.709 "method": "bdev_malloc_create" 00:08:44.709 }, 00:08:44.709 { 00:08:44.709 "params": { 00:08:44.709 "filename": "/dev/zram1", 00:08:44.709 "name": "uring0" 00:08:44.709 }, 00:08:44.709 "method": "bdev_uring_create" 00:08:44.709 }, 00:08:44.709 { 00:08:44.709 "method": "bdev_wait_for_examine" 00:08:44.709 } 00:08:44.709 ] 00:08:44.709 } 00:08:44.709 ] 00:08:44.709 } 00:08:44.709 [2024-12-16 01:30:15.147126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:44.709 [2024-12-16 01:30:15.147226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75905 ] 00:08:44.709 [2024-12-16 01:30:15.293787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.709 [2024-12-16 01:30:15.313834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.709 [2024-12-16 01:30:15.344288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.085  [2024-12-16T01:30:17.679Z] Copying: 230/512 [MB] (230 MBps) [2024-12-16T01:30:17.679Z] Copying: 471/512 [MB] (241 MBps) [2024-12-16T01:30:17.938Z] Copying: 512/512 [MB] (average 236 MBps) 00:08:47.280 00:08:47.280 01:30:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:47.280 01:30:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:47.280 01:30:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:47.280 01:30:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:47.280 [2024-12-16 01:30:17.883166] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:47.280 [2024-12-16 01:30:17.883254] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75949 ] 00:08:47.280 { 00:08:47.280 "subsystems": [ 00:08:47.280 { 00:08:47.280 "subsystem": "bdev", 00:08:47.280 "config": [ 00:08:47.280 { 00:08:47.280 "params": { 00:08:47.280 "block_size": 512, 00:08:47.280 "num_blocks": 1048576, 00:08:47.280 "name": "malloc0" 00:08:47.280 }, 00:08:47.280 "method": "bdev_malloc_create" 00:08:47.280 }, 00:08:47.280 { 00:08:47.280 "params": { 00:08:47.280 "filename": "/dev/zram1", 00:08:47.280 "name": "uring0" 00:08:47.280 }, 00:08:47.280 "method": "bdev_uring_create" 00:08:47.280 }, 00:08:47.280 { 00:08:47.280 "method": "bdev_wait_for_examine" 00:08:47.280 } 00:08:47.280 ] 00:08:47.280 } 00:08:47.280 ] 00:08:47.280 } 00:08:47.540 [2024-12-16 01:30:18.024844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.540 [2024-12-16 01:30:18.042778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.540 [2024-12-16 01:30:18.070145] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:48.917  [2024-12-16T01:30:20.511Z] Copying: 169/512 [MB] (169 MBps) [2024-12-16T01:30:21.447Z] Copying: 327/512 [MB] (158 MBps) [2024-12-16T01:30:21.706Z] Copying: 465/512 [MB] (138 MBps) [2024-12-16T01:30:21.965Z] Copying: 512/512 [MB] (average 151 MBps) 00:08:51.307 00:08:51.307 01:30:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:51.307 01:30:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 
xw2zrfrehxyu1dm7q6oc49ph6eocjymc7d8qpa8wqkallchcruvsclo8jag2l7yznii6su0pbdcx4wa33ptp9d1f8emw2uzhihz5purjcbcs1uk3cjzsyhws7vejy9mkvr5vukgmxwwbwexg0b92zjkia0fnr63lswag9v68r8kescr9nmwzqi3lt2cq72491ioseshe24q9x2z2hr1ks0ajg6uv4t2bs67n87xk3v8leq5s3ks37ki1yoknnhkas5sm25fpf3zdtmorugsl75dz6ir9f29lcz9fkdpq8g44kkxp67x0ywb6idpmhwvq0djgz4defmegtj3evycaz7v9jgmsimarxri9pmcivkekyjg71og8606rgkxr5ort8fw9rzula7jya4qtfoupqcmxyj1g32ptn7v4ph1jr43htuva78zbyri4lyb71lo25gxkk4vl3wqamlsn03mx6xvbr5zpciqntdxpgjjb8ykeejz5q2o6m5foioya0rnl4w8zm0x5x45kqhqtfjw01ochv1madxhc4y8ryv72ysfbs6qu8byt0z9wo0wlgtcmak7n4tbycxsbrfhpflfslaziqe9jxxry0iio2g6vo0aq6j9phnrl4c5w1pkku4a8pjemam727fhc00op4fgxn4p2nmm98qr9daocs8sh8mkmsxknpiu3j5i4h91a710gsohed3tafh25fxues42gradghaldzn4wq2ykvcrbi1xsiwczoi7rkpsks6ns2d1smlupf99dni6bhkjezavn1ny6w1t2fcq44frxbahc4zap4glwf1vh3mzx4tuk6oidhfq7r6du0pycxzxcxnu2tc3d1ubhalozn1dwrmuydgcm17zzfmvvz7xa2pjxgkgpeu2yd7vwrn7rr77qodr9lq4bt3rcbwxgafw3cmbf2mp2dt8gahpvkgwzolt2u5jgwisa8ffepfgdagnflj8qzi9fqvwf4o8fur1oqfm0vvya3mkm0heg7h4ur5f05m8n == \x\w\2\z\r\f\r\e\h\x\y\u\1\d\m\7\q\6\o\c\4\9\p\h\6\e\o\c\j\y\m\c\7\d\8\q\p\a\8\w\q\k\a\l\l\c\h\c\r\u\v\s\c\l\o\8\j\a\g\2\l\7\y\z\n\i\i\6\s\u\0\p\b\d\c\x\4\w\a\3\3\p\t\p\9\d\1\f\8\e\m\w\2\u\z\h\i\h\z\5\p\u\r\j\c\b\c\s\1\u\k\3\c\j\z\s\y\h\w\s\7\v\e\j\y\9\m\k\v\r\5\v\u\k\g\m\x\w\w\b\w\e\x\g\0\b\9\2\z\j\k\i\a\0\f\n\r\6\3\l\s\w\a\g\9\v\6\8\r\8\k\e\s\c\r\9\n\m\w\z\q\i\3\l\t\2\c\q\7\2\4\9\1\i\o\s\e\s\h\e\2\4\q\9\x\2\z\2\h\r\1\k\s\0\a\j\g\6\u\v\4\t\2\b\s\6\7\n\8\7\x\k\3\v\8\l\e\q\5\s\3\k\s\3\7\k\i\1\y\o\k\n\n\h\k\a\s\5\s\m\2\5\f\p\f\3\z\d\t\m\o\r\u\g\s\l\7\5\d\z\6\i\r\9\f\2\9\l\c\z\9\f\k\d\p\q\8\g\4\4\k\k\x\p\6\7\x\0\y\w\b\6\i\d\p\m\h\w\v\q\0\d\j\g\z\4\d\e\f\m\e\g\t\j\3\e\v\y\c\a\z\7\v\9\j\g\m\s\i\m\a\r\x\r\i\9\p\m\c\i\v\k\e\k\y\j\g\7\1\o\g\8\6\0\6\r\g\k\x\r\5\o\r\t\8\f\w\9\r\z\u\l\a\7\j\y\a\4\q\t\f\o\u\p\q\c\m\x\y\j\1\g\3\2\p\t\n\7\v\4\p\h\1\j\r\4\3\h\t\u\v\a\7\8\z\b\y\r\i\4\l\y\b\7\1\l\o\2\5\g\x\k\k\4\v\l\3\w\q\a\m\l\s\n\0\3\m\x\6\x\v\b\r\5\z\p\c\i\q\n\t\d\x\p\g\j\j\b\8\y\k\e\e\j\z\5\q\2\o\6\m\5\f\o\i\o\y\a\0\r\n\l\4\w\8\z\m\0\x\5\x\4\5\k\q\h\q\t\f\j\w\0\1\o\c\h\v\1\m\a\d\x\h\c\4\y\8\r\y\v\7\2\y\s\f\b\s\6\q\u\8\b\y\t\0\z\9\w\o\0\w\l\g\t\c\m\a\k\7\n\4\t\b\y\c\x\s\b\r\f\h\p\f\l\f\s\l\a\z\i\q\e\9\j\x\x\r\y\0\i\i\o\2\g\6\v\o\0\a\q\6\j\9\p\h\n\r\l\4\c\5\w\1\p\k\k\u\4\a\8\p\j\e\m\a\m\7\2\7\f\h\c\0\0\o\p\4\f\g\x\n\4\p\2\n\m\m\9\8\q\r\9\d\a\o\c\s\8\s\h\8\m\k\m\s\x\k\n\p\i\u\3\j\5\i\4\h\9\1\a\7\1\0\g\s\o\h\e\d\3\t\a\f\h\2\5\f\x\u\e\s\4\2\g\r\a\d\g\h\a\l\d\z\n\4\w\q\2\y\k\v\c\r\b\i\1\x\s\i\w\c\z\o\i\7\r\k\p\s\k\s\6\n\s\2\d\1\s\m\l\u\p\f\9\9\d\n\i\6\b\h\k\j\e\z\a\v\n\1\n\y\6\w\1\t\2\f\c\q\4\4\f\r\x\b\a\h\c\4\z\a\p\4\g\l\w\f\1\v\h\3\m\z\x\4\t\u\k\6\o\i\d\h\f\q\7\r\6\d\u\0\p\y\c\x\z\x\c\x\n\u\2\t\c\3\d\1\u\b\h\a\l\o\z\n\1\d\w\r\m\u\y\d\g\c\m\1\7\z\z\f\m\v\v\z\7\x\a\2\p\j\x\g\k\g\p\e\u\2\y\d\7\v\w\r\n\7\r\r\7\7\q\o\d\r\9\l\q\4\b\t\3\r\c\b\w\x\g\a\f\w\3\c\m\b\f\2\m\p\2\d\t\8\g\a\h\p\v\k\g\w\z\o\l\t\2\u\5\j\g\w\i\s\a\8\f\f\e\p\f\g\d\a\g\n\f\l\j\8\q\z\i\9\f\q\v\w\f\4\o\8\f\u\r\1\o\q\f\m\0\v\v\y\a\3\m\k\m\0\h\e\g\7\h\4\u\r\5\f\0\5\m\8\n ]] 00:08:51.307 01:30:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:51.307 01:30:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 
xw2zrfrehxyu1dm7q6oc49ph6eocjymc7d8qpa8wqkallchcruvsclo8jag2l7yznii6su0pbdcx4wa33ptp9d1f8emw2uzhihz5purjcbcs1uk3cjzsyhws7vejy9mkvr5vukgmxwwbwexg0b92zjkia0fnr63lswag9v68r8kescr9nmwzqi3lt2cq72491ioseshe24q9x2z2hr1ks0ajg6uv4t2bs67n87xk3v8leq5s3ks37ki1yoknnhkas5sm25fpf3zdtmorugsl75dz6ir9f29lcz9fkdpq8g44kkxp67x0ywb6idpmhwvq0djgz4defmegtj3evycaz7v9jgmsimarxri9pmcivkekyjg71og8606rgkxr5ort8fw9rzula7jya4qtfoupqcmxyj1g32ptn7v4ph1jr43htuva78zbyri4lyb71lo25gxkk4vl3wqamlsn03mx6xvbr5zpciqntdxpgjjb8ykeejz5q2o6m5foioya0rnl4w8zm0x5x45kqhqtfjw01ochv1madxhc4y8ryv72ysfbs6qu8byt0z9wo0wlgtcmak7n4tbycxsbrfhpflfslaziqe9jxxry0iio2g6vo0aq6j9phnrl4c5w1pkku4a8pjemam727fhc00op4fgxn4p2nmm98qr9daocs8sh8mkmsxknpiu3j5i4h91a710gsohed3tafh25fxues42gradghaldzn4wq2ykvcrbi1xsiwczoi7rkpsks6ns2d1smlupf99dni6bhkjezavn1ny6w1t2fcq44frxbahc4zap4glwf1vh3mzx4tuk6oidhfq7r6du0pycxzxcxnu2tc3d1ubhalozn1dwrmuydgcm17zzfmvvz7xa2pjxgkgpeu2yd7vwrn7rr77qodr9lq4bt3rcbwxgafw3cmbf2mp2dt8gahpvkgwzolt2u5jgwisa8ffepfgdagnflj8qzi9fqvwf4o8fur1oqfm0vvya3mkm0heg7h4ur5f05m8n == \x\w\2\z\r\f\r\e\h\x\y\u\1\d\m\7\q\6\o\c\4\9\p\h\6\e\o\c\j\y\m\c\7\d\8\q\p\a\8\w\q\k\a\l\l\c\h\c\r\u\v\s\c\l\o\8\j\a\g\2\l\7\y\z\n\i\i\6\s\u\0\p\b\d\c\x\4\w\a\3\3\p\t\p\9\d\1\f\8\e\m\w\2\u\z\h\i\h\z\5\p\u\r\j\c\b\c\s\1\u\k\3\c\j\z\s\y\h\w\s\7\v\e\j\y\9\m\k\v\r\5\v\u\k\g\m\x\w\w\b\w\e\x\g\0\b\9\2\z\j\k\i\a\0\f\n\r\6\3\l\s\w\a\g\9\v\6\8\r\8\k\e\s\c\r\9\n\m\w\z\q\i\3\l\t\2\c\q\7\2\4\9\1\i\o\s\e\s\h\e\2\4\q\9\x\2\z\2\h\r\1\k\s\0\a\j\g\6\u\v\4\t\2\b\s\6\7\n\8\7\x\k\3\v\8\l\e\q\5\s\3\k\s\3\7\k\i\1\y\o\k\n\n\h\k\a\s\5\s\m\2\5\f\p\f\3\z\d\t\m\o\r\u\g\s\l\7\5\d\z\6\i\r\9\f\2\9\l\c\z\9\f\k\d\p\q\8\g\4\4\k\k\x\p\6\7\x\0\y\w\b\6\i\d\p\m\h\w\v\q\0\d\j\g\z\4\d\e\f\m\e\g\t\j\3\e\v\y\c\a\z\7\v\9\j\g\m\s\i\m\a\r\x\r\i\9\p\m\c\i\v\k\e\k\y\j\g\7\1\o\g\8\6\0\6\r\g\k\x\r\5\o\r\t\8\f\w\9\r\z\u\l\a\7\j\y\a\4\q\t\f\o\u\p\q\c\m\x\y\j\1\g\3\2\p\t\n\7\v\4\p\h\1\j\r\4\3\h\t\u\v\a\7\8\z\b\y\r\i\4\l\y\b\7\1\l\o\2\5\g\x\k\k\4\v\l\3\w\q\a\m\l\s\n\0\3\m\x\6\x\v\b\r\5\z\p\c\i\q\n\t\d\x\p\g\j\j\b\8\y\k\e\e\j\z\5\q\2\o\6\m\5\f\o\i\o\y\a\0\r\n\l\4\w\8\z\m\0\x\5\x\4\5\k\q\h\q\t\f\j\w\0\1\o\c\h\v\1\m\a\d\x\h\c\4\y\8\r\y\v\7\2\y\s\f\b\s\6\q\u\8\b\y\t\0\z\9\w\o\0\w\l\g\t\c\m\a\k\7\n\4\t\b\y\c\x\s\b\r\f\h\p\f\l\f\s\l\a\z\i\q\e\9\j\x\x\r\y\0\i\i\o\2\g\6\v\o\0\a\q\6\j\9\p\h\n\r\l\4\c\5\w\1\p\k\k\u\4\a\8\p\j\e\m\a\m\7\2\7\f\h\c\0\0\o\p\4\f\g\x\n\4\p\2\n\m\m\9\8\q\r\9\d\a\o\c\s\8\s\h\8\m\k\m\s\x\k\n\p\i\u\3\j\5\i\4\h\9\1\a\7\1\0\g\s\o\h\e\d\3\t\a\f\h\2\5\f\x\u\e\s\4\2\g\r\a\d\g\h\a\l\d\z\n\4\w\q\2\y\k\v\c\r\b\i\1\x\s\i\w\c\z\o\i\7\r\k\p\s\k\s\6\n\s\2\d\1\s\m\l\u\p\f\9\9\d\n\i\6\b\h\k\j\e\z\a\v\n\1\n\y\6\w\1\t\2\f\c\q\4\4\f\r\x\b\a\h\c\4\z\a\p\4\g\l\w\f\1\v\h\3\m\z\x\4\t\u\k\6\o\i\d\h\f\q\7\r\6\d\u\0\p\y\c\x\z\x\c\x\n\u\2\t\c\3\d\1\u\b\h\a\l\o\z\n\1\d\w\r\m\u\y\d\g\c\m\1\7\z\z\f\m\v\v\z\7\x\a\2\p\j\x\g\k\g\p\e\u\2\y\d\7\v\w\r\n\7\r\r\7\7\q\o\d\r\9\l\q\4\b\t\3\r\c\b\w\x\g\a\f\w\3\c\m\b\f\2\m\p\2\d\t\8\g\a\h\p\v\k\g\w\z\o\l\t\2\u\5\j\g\w\i\s\a\8\f\f\e\p\f\g\d\a\g\n\f\l\j\8\q\z\i\9\f\q\v\w\f\4\o\8\f\u\r\1\o\q\f\m\0\v\v\y\a\3\m\k\m\0\h\e\g\7\h\4\u\r\5\f\0\5\m\8\n ]] 00:08:51.307 01:30:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:51.876 01:30:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:51.876 01:30:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:51.876 01:30:22 
spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:51.876 01:30:22 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:51.876 [2024-12-16 01:30:22.289974] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:51.876 [2024-12-16 01:30:22.290068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76011 ] 00:08:51.876 { 00:08:51.876 "subsystems": [ 00:08:51.876 { 00:08:51.876 "subsystem": "bdev", 00:08:51.876 "config": [ 00:08:51.876 { 00:08:51.876 "params": { 00:08:51.876 "block_size": 512, 00:08:51.876 "num_blocks": 1048576, 00:08:51.876 "name": "malloc0" 00:08:51.876 }, 00:08:51.876 "method": "bdev_malloc_create" 00:08:51.876 }, 00:08:51.876 { 00:08:51.876 "params": { 00:08:51.876 "filename": "/dev/zram1", 00:08:51.876 "name": "uring0" 00:08:51.876 }, 00:08:51.876 "method": "bdev_uring_create" 00:08:51.876 }, 00:08:51.876 { 00:08:51.876 "method": "bdev_wait_for_examine" 00:08:51.876 } 00:08:51.876 ] 00:08:51.876 } 00:08:51.876 ] 00:08:51.876 } 00:08:51.876 [2024-12-16 01:30:22.435597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.876 [2024-12-16 01:30:22.458901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.876 [2024-12-16 01:30:22.492593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.251  [2024-12-16T01:30:24.871Z] Copying: 133/512 [MB] (133 MBps) [2024-12-16T01:30:25.805Z] Copying: 263/512 [MB] (130 MBps) [2024-12-16T01:30:26.743Z] Copying: 394/512 [MB] (130 MBps) [2024-12-16T01:30:26.743Z] Copying: 512/512 [MB] (average 131 MBps) 00:08:56.085 00:08:56.085 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:56.085 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:56.085 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:56.085 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:56.085 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:56.085 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:56.344 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:56.344 01:30:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:56.344 [2024-12-16 01:30:26.796180] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:56.344 [2024-12-16 01:30:26.796449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76073 ] 00:08:56.344 { 00:08:56.344 "subsystems": [ 00:08:56.344 { 00:08:56.344 "subsystem": "bdev", 00:08:56.344 "config": [ 00:08:56.344 { 00:08:56.344 "params": { 00:08:56.344 "block_size": 512, 00:08:56.344 "num_blocks": 1048576, 00:08:56.344 "name": "malloc0" 00:08:56.344 }, 00:08:56.344 "method": "bdev_malloc_create" 00:08:56.344 }, 00:08:56.344 { 00:08:56.344 "params": { 00:08:56.344 "filename": "/dev/zram1", 00:08:56.344 "name": "uring0" 00:08:56.344 }, 00:08:56.344 "method": "bdev_uring_create" 00:08:56.344 }, 00:08:56.344 { 00:08:56.344 "params": { 00:08:56.344 "name": "uring0" 00:08:56.344 }, 00:08:56.344 "method": "bdev_uring_delete" 00:08:56.344 }, 00:08:56.344 { 00:08:56.344 "method": "bdev_wait_for_examine" 00:08:56.344 } 00:08:56.344 ] 00:08:56.344 } 00:08:56.344 ] 00:08:56.344 } 00:08:56.344 [2024-12-16 01:30:26.949461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.344 [2024-12-16 01:30:26.978452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.604 [2024-12-16 01:30:27.017796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.604  [2024-12-16T01:30:27.521Z] Copying: 0/0 [B] (average 0 Bps) 00:08:56.863 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:56.863 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:56.864 01:30:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:56.864 01:30:27 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:56.864 { 00:08:56.864 "subsystems": [ 00:08:56.864 { 00:08:56.864 "subsystem": "bdev", 00:08:56.864 "config": [ 00:08:56.864 { 00:08:56.864 "params": { 00:08:56.864 "block_size": 512, 00:08:56.864 "num_blocks": 1048576, 00:08:56.864 "name": "malloc0" 00:08:56.864 }, 00:08:56.864 "method": "bdev_malloc_create" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "params": { 00:08:56.864 "filename": "/dev/zram1", 00:08:56.864 "name": "uring0" 00:08:56.864 }, 00:08:56.864 "method": "bdev_uring_create" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "params": { 00:08:56.864 "name": "uring0" 00:08:56.864 }, 00:08:56.864 "method": "bdev_uring_delete" 00:08:56.864 }, 00:08:56.864 { 00:08:56.864 "method": "bdev_wait_for_examine" 00:08:56.864 } 00:08:56.864 ] 00:08:56.864 } 00:08:56.864 ] 00:08:56.864 } 00:08:56.864 [2024-12-16 01:30:27.457285] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:56.864 [2024-12-16 01:30:27.457380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76096 ] 00:08:57.123 [2024-12-16 01:30:27.612642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.123 [2024-12-16 01:30:27.640623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.123 [2024-12-16 01:30:27.680057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.383 [2024-12-16 01:30:27.813505] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:57.383 [2024-12-16 01:30:27.813575] spdk_dd.c: 931:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:57.383 [2024-12-16 01:30:27.813588] spdk_dd.c:1088:dd_run: *ERROR*: uring0: No such device 00:08:57.383 [2024-12-16 01:30:27.813598] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:57.383 [2024-12-16 01:30:28.008515] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:57.642 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:57.642 00:08:57.642 real 0m14.182s 00:08:57.642 user 0m9.744s 00:08:57.642 sys 0m11.998s 00:08:57.901 ************************************ 00:08:57.901 END TEST dd_uring_copy 00:08:57.901 ************************************ 00:08:57.901 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.901 01:30:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:57.901 ************************************ 00:08:57.901 END TEST spdk_dd_uring 00:08:57.901 ************************************ 00:08:57.901 00:08:57.901 real 0m14.431s 00:08:57.901 user 0m9.890s 00:08:57.901 sys 0m12.103s 00:08:57.901 01:30:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.901 01:30:28 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:57.901 01:30:28 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:57.901 01:30:28 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.901 01:30:28 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.901 01:30:28 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:57.901 ************************************ 00:08:57.901 START TEST spdk_dd_sparse 00:08:57.901 ************************************ 00:08:57.901 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:57.901 * Looking for test storage... 00:08:57.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:57.901 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:57.901 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lcov --version 00:08:57.901 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.902 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:58.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.161 --rc genhtml_branch_coverage=1 00:08:58.161 --rc genhtml_function_coverage=1 00:08:58.161 --rc genhtml_legend=1 00:08:58.161 --rc geninfo_all_blocks=1 00:08:58.161 --rc geninfo_unexecuted_blocks=1 00:08:58.161 00:08:58.161 ' 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.161 01:30:28 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.161 01:30:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:58.162 1+0 records in 00:08:58.162 1+0 records out 00:08:58.162 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00772956 s, 543 MB/s 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:58.162 1+0 records in 00:08:58.162 1+0 records out 00:08:58.162 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00720066 s, 582 MB/s 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:58.162 1+0 records in 00:08:58.162 1+0 records out 00:08:58.162 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00467633 s, 897 MB/s 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:58.162 ************************************ 00:08:58.162 START TEST dd_sparse_file_to_file 00:08:58.162 ************************************ 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:58.162 01:30:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:58.162 [2024-12-16 01:30:28.677173] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:58.162 [2024-12-16 01:30:28.677269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76196 ] 00:08:58.162 { 00:08:58.162 "subsystems": [ 00:08:58.162 { 00:08:58.162 "subsystem": "bdev", 00:08:58.162 "config": [ 00:08:58.162 { 00:08:58.162 "params": { 00:08:58.162 "block_size": 4096, 00:08:58.162 "filename": "dd_sparse_aio_disk", 00:08:58.162 "name": "dd_aio" 00:08:58.162 }, 00:08:58.162 "method": "bdev_aio_create" 00:08:58.162 }, 00:08:58.162 { 00:08:58.162 "params": { 00:08:58.162 "lvs_name": "dd_lvstore", 00:08:58.162 "bdev_name": "dd_aio" 00:08:58.162 }, 00:08:58.162 "method": "bdev_lvol_create_lvstore" 00:08:58.162 }, 00:08:58.162 { 00:08:58.162 "method": "bdev_wait_for_examine" 00:08:58.162 } 00:08:58.162 ] 00:08:58.162 } 00:08:58.162 ] 00:08:58.162 } 00:08:58.421 [2024-12-16 01:30:28.830782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.422 [2024-12-16 01:30:28.857809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.422 [2024-12-16 01:30:28.895183] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.422  [2024-12-16T01:30:29.339Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:58.681 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:58.681 00:08:58.681 real 0m0.527s 00:08:58.681 user 0m0.315s 00:08:58.681 sys 0m0.269s 00:08:58.681 ************************************ 00:08:58.681 END TEST dd_sparse_file_to_file 00:08:58.681 ************************************ 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:58.681 ************************************ 00:08:58.681 START TEST dd_sparse_file_to_bdev 
00:08:58.681 ************************************ 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:58.681 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:58.681 [2024-12-16 01:30:29.244017] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:58.681 [2024-12-16 01:30:29.244092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76238 ] 00:08:58.681 { 00:08:58.681 "subsystems": [ 00:08:58.681 { 00:08:58.681 "subsystem": "bdev", 00:08:58.681 "config": [ 00:08:58.681 { 00:08:58.681 "params": { 00:08:58.681 "block_size": 4096, 00:08:58.681 "filename": "dd_sparse_aio_disk", 00:08:58.681 "name": "dd_aio" 00:08:58.681 }, 00:08:58.681 "method": "bdev_aio_create" 00:08:58.681 }, 00:08:58.681 { 00:08:58.681 "params": { 00:08:58.681 "lvs_name": "dd_lvstore", 00:08:58.681 "lvol_name": "dd_lvol", 00:08:58.681 "size_in_mib": 36, 00:08:58.681 "thin_provision": true 00:08:58.681 }, 00:08:58.681 "method": "bdev_lvol_create" 00:08:58.681 }, 00:08:58.681 { 00:08:58.681 "method": "bdev_wait_for_examine" 00:08:58.681 } 00:08:58.681 ] 00:08:58.681 } 00:08:58.681 ] 00:08:58.681 } 00:08:58.941 [2024-12-16 01:30:29.390498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.941 [2024-12-16 01:30:29.415811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.941 [2024-12-16 01:30:29.450886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:58.941  [2024-12-16T01:30:29.859Z] Copying: 12/36 [MB] (average 521 MBps) 00:08:59.201 00:08:59.201 ************************************ 00:08:59.201 END TEST dd_sparse_file_to_bdev 00:08:59.201 ************************************ 00:08:59.201 00:08:59.201 real 0m0.506s 00:08:59.201 user 0m0.296s 00:08:59.201 sys 0m0.273s 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:59.201 ************************************ 00:08:59.201 START TEST dd_sparse_bdev_to_file 00:08:59.201 ************************************ 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:59.201 01:30:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:59.201 [2024-12-16 01:30:29.805895] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:59.201 [2024-12-16 01:30:29.805995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76271 ] 00:08:59.201 { 00:08:59.201 "subsystems": [ 00:08:59.201 { 00:08:59.201 "subsystem": "bdev", 00:08:59.201 "config": [ 00:08:59.201 { 00:08:59.201 "params": { 00:08:59.201 "block_size": 4096, 00:08:59.201 "filename": "dd_sparse_aio_disk", 00:08:59.201 "name": "dd_aio" 00:08:59.201 }, 00:08:59.201 "method": "bdev_aio_create" 00:08:59.201 }, 00:08:59.201 { 00:08:59.201 "method": "bdev_wait_for_examine" 00:08:59.201 } 00:08:59.201 ] 00:08:59.201 } 00:08:59.201 ] 00:08:59.201 } 00:08:59.460 [2024-12-16 01:30:29.957988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.460 [2024-12-16 01:30:29.986450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.460 [2024-12-16 01:30:30.026996] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.460  [2024-12-16T01:30:30.377Z] Copying: 12/36 [MB] (average 666 MBps) 00:08:59.719 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:59.719 00:08:59.719 real 0m0.524s 00:08:59.719 user 0m0.310s 00:08:59.719 sys 0m0.280s 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.719 ************************************ 00:08:59.719 END TEST dd_sparse_bdev_to_file 00:08:59.719 ************************************ 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:59.719 ************************************ 00:08:59.719 END TEST spdk_dd_sparse 00:08:59.719 ************************************ 00:08:59.719 00:08:59.719 real 0m1.946s 00:08:59.719 user 0m1.094s 00:08:59.719 sys 0m1.031s 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.719 01:30:30 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:59.979 01:30:30 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:59.979 01:30:30 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.979 01:30:30 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.979 01:30:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:59.979 ************************************ 00:08:59.979 START TEST spdk_dd_negative 00:08:59.979 ************************************ 00:08:59.979 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:59.979 * Looking for test storage... 
00:08:59.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:59.979 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.980 --rc genhtml_branch_coverage=1 00:08:59.980 --rc genhtml_function_coverage=1 00:08:59.980 --rc genhtml_legend=1 00:08:59.980 --rc geninfo_all_blocks=1 00:08:59.980 --rc geninfo_unexecuted_blocks=1 00:08:59.980 00:08:59.980 ' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.980 --rc genhtml_branch_coverage=1 00:08:59.980 --rc genhtml_function_coverage=1 00:08:59.980 --rc genhtml_legend=1 00:08:59.980 --rc geninfo_all_blocks=1 00:08:59.980 --rc geninfo_unexecuted_blocks=1 00:08:59.980 00:08:59.980 ' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:59.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.980 --rc genhtml_branch_coverage=1 00:08:59.980 --rc genhtml_function_coverage=1 00:08:59.980 --rc genhtml_legend=1 00:08:59.980 --rc geninfo_all_blocks=1 00:08:59.980 --rc geninfo_unexecuted_blocks=1 00:08:59.980 00:08:59.980 ' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.980 --rc genhtml_branch_coverage=1 00:08:59.980 --rc genhtml_function_coverage=1 00:08:59.980 --rc genhtml_legend=1 00:08:59.980 --rc geninfo_all_blocks=1 00:08:59.980 --rc geninfo_unexecuted_blocks=1 00:08:59.980 00:08:59.980 ' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:59.980 ************************************ 00:08:59.980 START TEST 
dd_invalid_arguments 00:08:59.980 ************************************ 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:59.980 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:59.981 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:00.240 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:00.240 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:00.240 00:09:00.240 CPU options: 00:09:00.240 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:00.240 (like [0,1,10]) 00:09:00.240 --lcores lcore to CPU mapping list. The list is in the format: 00:09:00.240 [<,lcores[@CPUs]>...] 00:09:00.240 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:00.240 Within the group, '-' is used for range separator, 00:09:00.240 ',' is used for single number separator. 00:09:00.240 '( )' can be omitted for single element group, 00:09:00.240 '@' can be omitted if cpus and lcores have the same value 00:09:00.240 --disable-cpumask-locks Disable CPU core lock files. 00:09:00.240 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:00.240 pollers in the app support interrupt mode) 00:09:00.240 -p, --main-core main (primary) core for DPDK 00:09:00.240 00:09:00.240 Configuration options: 00:09:00.240 -c, --config, --json JSON config file 00:09:00.240 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:00.240 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:00.241 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:00.241 --rpcs-allowed comma-separated list of permitted RPCS 00:09:00.241 --json-ignore-init-errors don't exit on invalid config entry 00:09:00.241 00:09:00.241 Memory options: 00:09:00.241 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:00.241 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:00.241 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:00.241 -R, --huge-unlink unlink huge files after initialization 00:09:00.241 -n, --mem-channels number of memory channels used for DPDK 00:09:00.241 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:00.241 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:00.241 --no-huge run without using hugepages 00:09:00.241 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:00.241 -i, --shm-id shared memory ID (optional) 00:09:00.241 -g, --single-file-segments force creating just one hugetlbfs file 00:09:00.241 00:09:00.241 PCI options: 00:09:00.241 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:00.241 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:00.241 -u, --no-pci disable PCI access 00:09:00.241 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:00.241 00:09:00.241 Log options: 00:09:00.241 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:00.241 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:00.241 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:00.241 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:00.241 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:09:00.241 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:09:00.241 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:09:00.241 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:09:00.241 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:00.241 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:09:00.241 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:09:00.241 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:09:00.241 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:00.241 --silence-noticelog disable notice level logging to stderr 00:09:00.241 00:09:00.241 Trace options: 00:09:00.241 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:00.241 [2024-12-16 01:30:30.670270] spdk_dd.c:1478:main: *ERROR*: Invalid arguments 00:09:00.241 setting 0 to disable trace (default 32768) 00:09:00.241 Tracepoints vary in size and can use more than one trace entry. 00:09:00.241 -e, --tpoint-group [:] 00:09:00.241 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:09:00.241 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:09:00.241 blob, bdev_raid, scheduler, all). 00:09:00.241 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:00.241 a tracepoint group. First tpoint inside a group can be enabled by 00:09:00.241 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:00.241 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:09:00.241 in /include/spdk_internal/trace_defs.h 00:09:00.241 00:09:00.241 Other options: 00:09:00.241 -h, --help show this usage 00:09:00.241 -v, --version print SPDK version 00:09:00.241 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:00.241 --env-context Opaque context for use of the env implementation 00:09:00.241 00:09:00.241 Application specific: 00:09:00.241 [--------- DD Options ---------] 00:09:00.241 --if Input file. Must specify either --if or --ib. 00:09:00.241 --ib Input bdev. Must specifier either --if or --ib 00:09:00.241 --of Output file. Must specify either --of or --ob. 00:09:00.241 --ob Output bdev. Must specify either --of or --ob. 00:09:00.241 --iflag Input file flags. 00:09:00.241 --oflag Output file flags. 00:09:00.241 --bs I/O unit size (default: 4096) 00:09:00.241 --qd Queue depth (default: 2) 00:09:00.241 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:00.241 --skip Skip this many I/O units at start of input. (default: 0) 00:09:00.241 --seek Skip this many I/O units at start of output. (default: 0) 00:09:00.241 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:00.241 --sparse Enable hole skipping in input target 00:09:00.241 Available iflag and oflag values: 00:09:00.241 append - append mode 00:09:00.241 direct - use direct I/O for data 00:09:00.241 directory - fail unless a directory 00:09:00.241 dsync - use synchronized I/O for data 00:09:00.241 noatime - do not update access time 00:09:00.241 noctty - do not assign controlling terminal from file 00:09:00.241 nofollow - do not follow symlinks 00:09:00.241 nonblock - use non-blocking I/O 00:09:00.241 sync - use synchronized I/O for data and metadata 00:09:00.241 ************************************ 00:09:00.241 END TEST dd_invalid_arguments 00:09:00.241 ************************************ 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.241 00:09:00.241 real 0m0.081s 00:09:00.241 user 0m0.052s 00:09:00.241 sys 0m0.027s 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.241 ************************************ 00:09:00.241 START TEST dd_double_input 00:09:00.241 ************************************ 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:00.241 [2024-12-16 01:30:30.801054] spdk_dd.c:1485:main: *ERROR*: You may specify either --if or --ib, but not both. 
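The rejection above is the expected outcome: spdk_dd accepts exactly one input source, either a regular file via --if or a bdev via --ib, never both (the same rule applies to --of and --ob on the output side). For contrast, a minimal sketch of the two valid shapes, reusing only flags and paths that appear in this run; the bdev.json config file naming the malloc bdevs is an illustrative assumption:

  # file-to-file copy
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096

  # bdev-to-bdev copy; the bdevs must be defined in the JSON config given to --json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=malloc0 --ob=malloc1 --bs=512 --json bdev.json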
00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.241 00:09:00.241 real 0m0.079s 00:09:00.241 user 0m0.041s 00:09:00.241 sys 0m0.036s 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.241 ************************************ 00:09:00.241 END TEST dd_double_input 00:09:00.241 ************************************ 00:09:00.241 01:30:30 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.242 ************************************ 00:09:00.242 START TEST dd_double_output 00:09:00.242 ************************************ 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.242 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:00.501 [2024-12-16 01:30:30.936258] spdk_dd.c:1491:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:00.501 ************************************ 00:09:00.501 END TEST dd_double_output 00:09:00.501 ************************************ 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.501 00:09:00.501 real 0m0.081s 00:09:00.501 user 0m0.054s 00:09:00.501 sys 0m0.026s 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.501 01:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.501 ************************************ 00:09:00.501 START TEST dd_no_input 00:09:00.501 ************************************ 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.501 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:00.502 [2024-12-16 01:30:31.074248] spdk_dd.c:1497:main: 
*ERROR*: You must specify either --if or --ib 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.502 00:09:00.502 real 0m0.084s 00:09:00.502 user 0m0.052s 00:09:00.502 sys 0m0.030s 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.502 ************************************ 00:09:00.502 END TEST dd_no_input 00:09:00.502 ************************************ 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.502 ************************************ 00:09:00.502 START TEST dd_no_output 00:09:00.502 ************************************ 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.502 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:00.761 [2024-12-16 01:30:31.210774] spdk_dd.c:1503:main: *ERROR*: You must specify either --of or --ob 00:09:00.761 01:30:31 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.761 00:09:00.761 real 0m0.082s 00:09:00.761 user 0m0.057s 00:09:00.761 sys 0m0.024s 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:00.761 ************************************ 00:09:00.761 END TEST dd_no_output 00:09:00.761 ************************************ 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.761 ************************************ 00:09:00.761 START TEST dd_wrong_blocksize 00:09:00.761 ************************************ 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:00.761 [2024-12-16 01:30:31.340786] spdk_dd.c:1509:main: *ERROR*: Invalid --bs value 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:00.761 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:00.761 00:09:00.761 real 0m0.079s 00:09:00.761 user 0m0.051s 00:09:00.761 sys 0m0.026s 00:09:00.761 ************************************ 00:09:00.762 END TEST dd_wrong_blocksize 00:09:00.762 ************************************ 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:00.762 ************************************ 00:09:00.762 START TEST dd_smaller_blocksize 00:09:00.762 ************************************ 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:00.762 
01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:00.762 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:01.021 [2024-12-16 01:30:31.471229] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:01.021 [2024-12-16 01:30:31.471482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76497 ] 00:09:01.021 [2024-12-16 01:30:31.619649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.021 [2024-12-16 01:30:31.645189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.280 [2024-12-16 01:30:31.681512] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.280 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:01.280 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:01.280 [2024-12-16 01:30:31.702623] spdk_dd.c:1182:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:01.280 [2024-12-16 01:30:31.702657] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.280 [2024-12-16 01:30:31.779124] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:01.280 ************************************ 00:09:01.280 END TEST dd_smaller_blocksize 00:09:01.280 ************************************ 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.280 00:09:01.280 real 0m0.426s 00:09:01.280 user 0m0.211s 00:09:01.280 sys 0m0.111s 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.280 ************************************ 00:09:01.280 START TEST dd_invalid_count 00:09:01.280 ************************************ 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
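All of these cases go through the NOT wrapper from autotest_common.sh, whose xtrace output is what appears above: it resolves the spdk_dd binary with valid_exec_arg, runs it, records the exit status in es, and counts the test as passed only when the status is non-zero (large statuses such as the 244 seen here are reduced to 116 and finally 1). A rough standalone equivalent of that pattern, deliberately simplified and not the actual helper:

  # succeed only when the wrapped command fails (simplified stand-in for NOT)
  not() {
      if "$@"; then
          return 1   # command unexpectedly succeeded: the negative test should fail
      else
          return 0   # command failed as expected: the negative test passes
      fi
  }
  # example: the invalid-arguments case from the top of this suite
  not /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=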
00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.280 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.281 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.281 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.281 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.281 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:01.540 [2024-12-16 01:30:31.952041] spdk_dd.c:1515:main: *ERROR*: Invalid --count value 00:09:01.540 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:01.540 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.540 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.540 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.540 00:09:01.540 real 0m0.083s 00:09:01.540 user 0m0.050s 00:09:01.540 sys 0m0.031s 00:09:01.540 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.540 ************************************ 00:09:01.540 END TEST dd_invalid_count 00:09:01.540 ************************************ 00:09:01.540 01:30:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.540 ************************************ 
00:09:01.540 START TEST dd_invalid_oflag 00:09:01.540 ************************************ 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:01.540 [2024-12-16 01:30:32.089680] spdk_dd.c:1521:main: *ERROR*: --oflags may be used only with --of 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.540 ************************************ 00:09:01.540 END TEST dd_invalid_oflag 00:09:01.540 ************************************ 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.540 00:09:01.540 real 0m0.083s 00:09:01.540 user 0m0.051s 00:09:01.540 sys 0m0.031s 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.540 ************************************ 00:09:01.540 START TEST dd_invalid_iflag 00:09:01.540 
************************************ 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.540 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.541 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:01.800 [2024-12-16 01:30:32.223760] spdk_dd.c:1527:main: *ERROR*: --iflags may be used only with --if 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:01.800 ************************************ 00:09:01.800 END TEST dd_invalid_iflag 00:09:01.800 ************************************ 00:09:01.800 00:09:01.800 real 0m0.084s 00:09:01.800 user 0m0.050s 00:09:01.800 sys 0m0.032s 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:01.800 ************************************ 00:09:01.800 START TEST dd_unknown_flag 00:09:01.800 ************************************ 00:09:01.800 
01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:01.800 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:01.800 [2024-12-16 01:30:32.414633] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
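The two flag tests above establish that --oflag is accepted only together with --of and --iflag only together with --if, and the unknown-flag case now starting passes --oflag=-1, a value that is not in the 'Available iflag and oflag values' list printed near the top of this suite. For contrast, a valid pairing sketched with one documented value (paths reuse the dump files from this run):

  # --oflag is legal here because --of is present, and 'direct' is a listed value
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
      --oflag=direct --bs=4096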
00:09:01.800 [2024-12-16 01:30:32.414757] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76588 ] 00:09:02.059 [2024-12-16 01:30:32.567494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.059 [2024-12-16 01:30:32.593693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.059 [2024-12-16 01:30:32.629403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.059 [2024-12-16 01:30:32.648824] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:02.059 [2024-12-16 01:30:32.648907] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.059 [2024-12-16 01:30:32.648982] spdk_dd.c: 984:parse_flags: *ERROR*: Unknown file flag: -1 00:09:02.059 [2024-12-16 01:30:32.648998] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.059 [2024-12-16 01:30:32.649261] spdk_dd.c:1216:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:09:02.059 [2024-12-16 01:30:32.649281] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.059 [2024-12-16 01:30:32.649343] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:02.059 [2024-12-16 01:30:32.649356] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:09:02.319 [2024-12-16 01:30:32.721484] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:09:02.319 ************************************ 00:09:02.319 END TEST dd_unknown_flag 00:09:02.319 ************************************ 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.319 00:09:02.319 real 0m0.484s 00:09:02.319 user 0m0.259s 00:09:02.319 sys 0m0.131s 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:02.319 ************************************ 00:09:02.319 START TEST dd_invalid_json 00:09:02.319 ************************************ 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:02.319 01:30:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:02.319 [2024-12-16 01:30:32.903176] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:02.319 [2024-12-16 01:30:32.903298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76618 ] 00:09:02.578 [2024-12-16 01:30:33.057754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.578 [2024-12-16 01:30:33.082447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.578 [2024-12-16 01:30:33.082758] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:09:02.578 [2024-12-16 01:30:33.082785] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:02.578 [2024-12-16 01:30:33.082797] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:02.578 [2024-12-16 01:30:33.082843] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:02.578 ************************************ 00:09:02.578 END TEST dd_invalid_json 00:09:02.578 ************************************ 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:02.578 00:09:02.578 real 0m0.302s 00:09:02.578 user 0m0.141s 00:09:02.578 sys 0m0.058s 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:02.578 ************************************ 00:09:02.578 START TEST dd_invalid_seek 00:09:02.578 ************************************ 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:09:02.578 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:02.579 
01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:02.579 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:09:02.838 { 00:09:02.838 "subsystems": [ 00:09:02.838 { 00:09:02.838 "subsystem": "bdev", 00:09:02.838 "config": [ 00:09:02.838 { 00:09:02.838 "params": { 00:09:02.838 "block_size": 512, 00:09:02.838 "num_blocks": 512, 00:09:02.838 "name": "malloc0" 00:09:02.838 }, 00:09:02.838 "method": "bdev_malloc_create" 00:09:02.838 }, 00:09:02.838 { 00:09:02.838 "params": { 00:09:02.838 "block_size": 512, 00:09:02.838 "num_blocks": 512, 00:09:02.838 "name": "malloc1" 00:09:02.838 }, 00:09:02.838 "method": "bdev_malloc_create" 00:09:02.838 }, 00:09:02.838 { 00:09:02.838 "method": "bdev_wait_for_examine" 00:09:02.838 } 00:09:02.838 ] 00:09:02.838 } 00:09:02.838 ] 00:09:02.838 } 00:09:02.838 [2024-12-16 01:30:33.265846] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
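The JSON printed above is the bdev configuration the test generates with gen_conf and hands to spdk_dd as --json /dev/fd/62: two 512-block, 512-byte malloc bdevs plus a bdev_wait_for_examine step. Outside the harness the same configuration can be passed as an ordinary file; a sketch under that assumption (conf.json is a hypothetical filename, and --seek is kept below the 512 blocks malloc1 exposes, unlike the --seek=513 used here to provoke the error):

  # save the configuration shown above, then copy 16 blocks from malloc0
  # into malloc1 starting at block 1 of the output bdev
  cat > conf.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"params": {"block_size": 512, "num_blocks": 512, "name": "malloc0"},
     "method": "bdev_malloc_create"},
    {"params": {"block_size": 512, "num_blocks": 512, "name": "malloc1"},
     "method": "bdev_malloc_create"},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=malloc0 --ob=malloc1 --seek=1 --count=16 --bs=512 --json conf.json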
00:09:02.838 [2024-12-16 01:30:33.266086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76642 ] 00:09:02.838 [2024-12-16 01:30:33.420491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.838 [2024-12-16 01:30:33.446243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.838 [2024-12-16 01:30:33.483431] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.116 [2024-12-16 01:30:33.530922] spdk_dd.c:1143:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:09:03.116 [2024-12-16 01:30:33.531206] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.116 [2024-12-16 01:30:33.610323] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.116 00:09:03.116 real 0m0.467s 00:09:03.116 user 0m0.284s 00:09:03.116 sys 0m0.144s 00:09:03.116 ************************************ 00:09:03.116 END TEST dd_invalid_seek 00:09:03.116 ************************************ 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.116 01:30:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:03.116 ************************************ 00:09:03.116 START TEST dd_invalid_skip 00:09:03.116 ************************************ 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:03.117 01:30:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:09:03.383 [2024-12-16 01:30:33.782775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
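The skip case now starting is the input-side counterpart of the seek case above: per the usage text, --skip discards I/O units at the start of the input rather than the output, so with the same two 512-block malloc bdevs only values below 512 can succeed, and the 513 passed here is expected to be rejected. An illustrative sketch, assuming the conf.json file from the earlier sketch:

  # skip the first 8 blocks of malloc0 and copy 16 blocks into malloc1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --ib=malloc0 --ob=malloc1 --skip=8 --count=16 --bs=512 --json conf.json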
00:09:03.383 [2024-12-16 01:30:33.782871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76681 ] 00:09:03.383 { 00:09:03.383 "subsystems": [ 00:09:03.383 { 00:09:03.383 "subsystem": "bdev", 00:09:03.383 "config": [ 00:09:03.383 { 00:09:03.383 "params": { 00:09:03.383 "block_size": 512, 00:09:03.383 "num_blocks": 512, 00:09:03.383 "name": "malloc0" 00:09:03.383 }, 00:09:03.383 "method": "bdev_malloc_create" 00:09:03.383 }, 00:09:03.383 { 00:09:03.383 "params": { 00:09:03.383 "block_size": 512, 00:09:03.383 "num_blocks": 512, 00:09:03.383 "name": "malloc1" 00:09:03.383 }, 00:09:03.383 "method": "bdev_malloc_create" 00:09:03.383 }, 00:09:03.383 { 00:09:03.383 "method": "bdev_wait_for_examine" 00:09:03.383 } 00:09:03.383 ] 00:09:03.383 } 00:09:03.383 ] 00:09:03.383 } 00:09:03.383 [2024-12-16 01:30:33.932167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.383 [2024-12-16 01:30:33.960562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.383 [2024-12-16 01:30:34.000119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.643 [2024-12-16 01:30:34.044927] spdk_dd.c:1100:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:09:03.643 [2024-12-16 01:30:34.044999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:03.643 [2024-12-16 01:30:34.112697] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:03.643 00:09:03.643 real 0m0.443s 00:09:03.643 user 0m0.274s 00:09:03.643 sys 0m0.132s 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.643 ************************************ 00:09:03.643 END TEST dd_invalid_skip 00:09:03.643 ************************************ 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:03.643 ************************************ 00:09:03.643 START TEST dd_invalid_input_count 00:09:03.643 ************************************ 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:09:03.643 01:30:34 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:03.643 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:09:03.643 [2024-12-16 01:30:34.275059] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
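Each of these negative cases ends with the same exit-status bookkeeping in the xtrace: es=228, a (( es > 128 )) check, es=100, then es=1 before the final non-zero assertion. The values are consistent with subtracting 128 from statuses above 128 (228 becomes 100 here; 234 becomes 106 in the bs test further down) and then collapsing every failure to 1. A small bash sketch of that logic; the helper name is made up and the real case arms in common/autotest_common.sh are not visible in the trace, so this is a reading of the log rather than the repo's code.
# Illustrative only; mirrors the es values printed in the xtrace above.
normalize_es() {
  local es=$1
  (( es > 128 )) && es=$(( es - 128 ))   # 228 -> 100, 234 -> 106 in the runs above
  case "$es" in
    0) ;;                                # unexpected success stays 0
    *) es=1 ;;                           # any failure collapses to 1
  esac
  echo "$es"
}
es=$(normalize_es 228)
(( !es == 0 )) && echo "spdk_dd failed as required"   # same final check as in the trace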
00:09:03.643 [2024-12-16 01:30:34.275154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76709 ] 00:09:03.643 { 00:09:03.643 "subsystems": [ 00:09:03.643 { 00:09:03.643 "subsystem": "bdev", 00:09:03.643 "config": [ 00:09:03.643 { 00:09:03.643 "params": { 00:09:03.643 "block_size": 512, 00:09:03.643 "num_blocks": 512, 00:09:03.643 "name": "malloc0" 00:09:03.643 }, 00:09:03.643 "method": "bdev_malloc_create" 00:09:03.643 }, 00:09:03.643 { 00:09:03.643 "params": { 00:09:03.643 "block_size": 512, 00:09:03.643 "num_blocks": 512, 00:09:03.643 "name": "malloc1" 00:09:03.643 }, 00:09:03.643 "method": "bdev_malloc_create" 00:09:03.643 }, 00:09:03.643 { 00:09:03.643 "method": "bdev_wait_for_examine" 00:09:03.643 } 00:09:03.643 ] 00:09:03.643 } 00:09:03.643 ] 00:09:03.643 } 00:09:03.903 [2024-12-16 01:30:34.425145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.903 [2024-12-16 01:30:34.446564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.903 [2024-12-16 01:30:34.475326] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:03.903 [2024-12-16 01:30:34.517084] spdk_dd.c:1108:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:09:03.903 [2024-12-16 01:30:34.517163] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.162 [2024-12-16 01:30:34.581000] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.162 00:09:04.162 real 0m0.415s 00:09:04.162 user 0m0.272s 00:09:04.162 sys 0m0.105s 00:09:04.162 ************************************ 00:09:04.162 END TEST dd_invalid_input_count 00:09:04.162 ************************************ 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:04.162 ************************************ 00:09:04.162 START TEST dd_invalid_output_count 00:09:04.162 ************************************ 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # 
invalid_output_count 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:09:04.162 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.163 01:30:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:09:04.163 { 00:09:04.163 "subsystems": [ 00:09:04.163 { 00:09:04.163 "subsystem": "bdev", 00:09:04.163 "config": [ 00:09:04.163 { 00:09:04.163 "params": { 00:09:04.163 "block_size": 512, 00:09:04.163 "num_blocks": 512, 00:09:04.163 "name": "malloc0" 00:09:04.163 }, 00:09:04.163 "method": "bdev_malloc_create" 00:09:04.163 }, 00:09:04.163 { 00:09:04.163 "method": "bdev_wait_for_examine" 00:09:04.163 } 00:09:04.163 ] 00:09:04.163 } 00:09:04.163 ] 00:09:04.163 } 00:09:04.163 [2024-12-16 01:30:34.743762] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 
initialization... 00:09:04.163 [2024-12-16 01:30:34.743852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76748 ] 00:09:04.422 [2024-12-16 01:30:34.890513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.422 [2024-12-16 01:30:34.909760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.422 [2024-12-16 01:30:34.938214] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.422 [2024-12-16 01:30:34.972353] spdk_dd.c:1150:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:09:04.422 [2024-12-16 01:30:34.972423] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.422 [2024-12-16 01:30:35.037335] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.680 00:09:04.680 real 0m0.405s 00:09:04.680 user 0m0.261s 00:09:04.680 sys 0m0.098s 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.680 ************************************ 00:09:04.680 END TEST dd_invalid_output_count 00:09:04.680 ************************************ 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:04.680 ************************************ 00:09:04.680 START TEST dd_bs_not_multiple 00:09:04.680 ************************************ 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:09:04.680 01:30:35 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:04.680 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:09:04.680 { 00:09:04.680 "subsystems": [ 00:09:04.680 { 00:09:04.680 "subsystem": "bdev", 00:09:04.680 "config": [ 00:09:04.680 { 00:09:04.680 "params": { 00:09:04.680 "block_size": 512, 00:09:04.680 "num_blocks": 512, 00:09:04.680 "name": "malloc0" 00:09:04.680 }, 00:09:04.680 "method": "bdev_malloc_create" 00:09:04.680 }, 00:09:04.680 { 00:09:04.680 "params": { 00:09:04.680 "block_size": 512, 00:09:04.680 "num_blocks": 512, 00:09:04.680 "name": "malloc1" 00:09:04.680 }, 00:09:04.680 "method": "bdev_malloc_create" 00:09:04.680 }, 00:09:04.680 { 00:09:04.680 "method": "bdev_wait_for_examine" 00:09:04.680 } 00:09:04.680 ] 00:09:04.680 } 00:09:04.680 ] 00:09:04.680 } 00:09:04.680 [2024-12-16 01:30:35.203468] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
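The dd_bs_not_multiple case swaps the out-of-range offset for a block size that is not a multiple of the malloc bdevs' native 512-byte block size. Reusing SPDK_DD and CONF from the dd_invalid_skip sketch above (same assumptions apply), the failing invocation reduces to:
# --bs=513 is not a multiple of malloc0's 512-byte block size, so spdk_dd should refuse the copy.
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513 --json <(printf '%s' "$CONF") \
  || echo "failed as expected (exit $?)"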
00:09:04.680 [2024-12-16 01:30:35.203755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76774 ] 00:09:04.938 [2024-12-16 01:30:35.354914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.938 [2024-12-16 01:30:35.376801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.938 [2024-12-16 01:30:35.408228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.938 [2024-12-16 01:30:35.452447] spdk_dd.c:1166:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:09:04.938 [2024-12-16 01:30:35.452519] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:04.938 [2024-12-16 01:30:35.524363] spdk_dd.c:1534:main: *ERROR*: Error occurred while performing copy 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:04.938 00:09:04.938 real 0m0.440s 00:09:04.938 user 0m0.285s 00:09:04.938 sys 0m0.116s 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.938 ************************************ 00:09:04.938 END TEST dd_bs_not_multiple 00:09:04.938 01:30:35 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:09:04.938 ************************************ 00:09:05.197 ************************************ 00:09:05.197 END TEST spdk_dd_negative 00:09:05.197 ************************************ 00:09:05.197 00:09:05.197 real 0m5.228s 00:09:05.197 user 0m2.830s 00:09:05.197 sys 0m1.813s 00:09:05.197 01:30:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.197 01:30:35 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:05.197 ************************************ 00:09:05.197 END TEST spdk_dd 00:09:05.197 ************************************ 00:09:05.197 00:09:05.197 real 1m4.388s 00:09:05.197 user 0m40.795s 00:09:05.197 sys 0m27.883s 00:09:05.197 01:30:35 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.197 01:30:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:05.197 01:30:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@260 -- # timing_exit lib 00:09:05.197 01:30:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.197 01:30:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.197 01:30:35 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@277 -- 
# export NET_TYPE 00:09:05.197 01:30:35 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:09:05.197 01:30:35 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:05.197 01:30:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.197 01:30:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.197 01:30:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.197 ************************************ 00:09:05.197 START TEST nvmf_tcp 00:09:05.197 ************************************ 00:09:05.197 01:30:35 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:05.197 * Looking for test storage... 00:09:05.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:05.197 01:30:35 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.197 01:30:35 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.197 01:30:35 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.456 01:30:35 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.456 01:30:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.456 01:30:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.456 01:30:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.456 01:30:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.457 01:30:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.457 --rc genhtml_branch_coverage=1 00:09:05.457 --rc genhtml_function_coverage=1 00:09:05.457 --rc genhtml_legend=1 00:09:05.457 --rc geninfo_all_blocks=1 00:09:05.457 --rc geninfo_unexecuted_blocks=1 00:09:05.457 00:09:05.457 ' 00:09:05.457 01:30:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:05.457 01:30:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:05.457 01:30:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.457 01:30:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:05.457 ************************************ 00:09:05.457 START TEST nvmf_target_core 00:09:05.457 ************************************ 00:09:05.457 01:30:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:05.457 * Looking for test storage... 00:09:05.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:05.457 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.457 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.457 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.717 --rc genhtml_branch_coverage=1 00:09:05.717 --rc genhtml_function_coverage=1 00:09:05.717 --rc genhtml_legend=1 00:09:05.717 --rc geninfo_all_blocks=1 00:09:05.717 --rc geninfo_unexecuted_blocks=1 00:09:05.717 00:09:05.717 ' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.717 --rc genhtml_branch_coverage=1 00:09:05.717 --rc genhtml_function_coverage=1 00:09:05.717 --rc genhtml_legend=1 00:09:05.717 --rc geninfo_all_blocks=1 00:09:05.717 --rc geninfo_unexecuted_blocks=1 00:09:05.717 00:09:05.717 ' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:05.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.717 --rc genhtml_branch_coverage=1 00:09:05.717 --rc genhtml_function_coverage=1 00:09:05.717 --rc genhtml_legend=1 00:09:05.717 --rc geninfo_all_blocks=1 00:09:05.717 --rc geninfo_unexecuted_blocks=1 00:09:05.717 00:09:05.717 ' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.717 --rc genhtml_branch_coverage=1 00:09:05.717 --rc genhtml_function_coverage=1 00:09:05.717 --rc genhtml_legend=1 00:09:05.717 --rc geninfo_all_blocks=1 00:09:05.717 --rc geninfo_unexecuted_blocks=1 00:09:05.717 00:09:05.717 ' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.717 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.718 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.718 ************************************ 00:09:05.718 START TEST nvmf_host_management 00:09:05.718 ************************************ 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:05.718 * Looking for test storage... 
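The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" lines above come from a numeric test against an empty value ('[' '' -eq 1 ']'); bash prints the complaint and the script carries on, so the run treats it as noise. A two-line illustration of the failure mode and one common guard, illustrative only and not the repo's actual fix:
# Reproducing the message, then avoiding it by defaulting the empty value to 0.
v=""
[ "$v" -eq 1 ]                                # prints "[: : integer expression expected" on stderr
[ "${v:-0}" -eq 1 ] || echo "empty value treated as 0"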
00:09:05.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:05.718 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.978 --rc genhtml_branch_coverage=1 00:09:05.978 --rc genhtml_function_coverage=1 00:09:05.978 --rc genhtml_legend=1 00:09:05.978 --rc geninfo_all_blocks=1 00:09:05.978 --rc geninfo_unexecuted_blocks=1 00:09:05.978 00:09:05.978 ' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.978 --rc genhtml_branch_coverage=1 00:09:05.978 --rc genhtml_function_coverage=1 00:09:05.978 --rc genhtml_legend=1 00:09:05.978 --rc geninfo_all_blocks=1 00:09:05.978 --rc geninfo_unexecuted_blocks=1 00:09:05.978 00:09:05.978 ' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.978 --rc genhtml_branch_coverage=1 00:09:05.978 --rc genhtml_function_coverage=1 00:09:05.978 --rc genhtml_legend=1 00:09:05.978 --rc geninfo_all_blocks=1 00:09:05.978 --rc geninfo_unexecuted_blocks=1 00:09:05.978 00:09:05.978 ' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:05.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.978 --rc genhtml_branch_coverage=1 00:09:05.978 --rc genhtml_function_coverage=1 00:09:05.978 --rc genhtml_legend=1 00:09:05.978 --rc geninfo_all_blocks=1 00:09:05.978 --rc geninfo_unexecuted_blocks=1 00:09:05.978 00:09:05.978 ' 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
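What common.sh sets up next (nvmf_veth_init, traced below) is two veth pairs with the target ends moved into the nvmf_tgt_ns_spdk namespace and addressed out of 10.0.0.0/24: 10.0.0.1/10.0.0.2 for the initiator interfaces, 10.0.0.3/10.0.0.4 for the target interfaces. The sketch below is condensed from the ip(8) commands in the trace that follows, keeping only the first initiator/target pair and omitting the bridge plumbing; root privileges are assumed.
# Condensed veth/namespace layout used by the nvmf tests (first pair only).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_FIRST_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up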
00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.978 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.979 01:30:36 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:05.979 Cannot find device "nvmf_init_br" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:05.979 Cannot find device "nvmf_init_br2" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:05.979 Cannot find device "nvmf_tgt_br" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.979 Cannot find device "nvmf_tgt_br2" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:05.979 Cannot find device "nvmf_init_br" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:05.979 Cannot find device "nvmf_init_br2" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:05.979 Cannot find device "nvmf_tgt_br" 00:09:05.979 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:05.980 Cannot find device "nvmf_tgt_br2" 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:05.980 Cannot find device "nvmf_br" 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:05.980 Cannot find device "nvmf_init_if" 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:05.980 Cannot find device "nvmf_init_if2" 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.980 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:06.239 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:06.240 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:06.240 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:06.240 00:09:06.240 --- 10.0.0.3 ping statistics --- 00:09:06.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.240 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:06.240 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:06.240 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:09:06.240 00:09:06.240 --- 10.0.0.4 ping statistics --- 00:09:06.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.240 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:06.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:06.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:09:06.240 00:09:06.240 --- 10.0.0.1 ping statistics --- 00:09:06.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.240 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:06.240 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:06.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:06.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:09:06.499 00:09:06.499 --- 10.0.0.2 ping statistics --- 00:09:06.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.499 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=77124 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 77124 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 77124 ']' 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.499 01:30:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.499 [2024-12-16 01:30:37.000810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:06.499 [2024-12-16 01:30:37.000898] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.759 [2024-12-16 01:30:37.155610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.759 [2024-12-16 01:30:37.183434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.759 [2024-12-16 01:30:37.183781] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.759 [2024-12-16 01:30:37.183955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.759 [2024-12-16 01:30:37.184102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.759 [2024-12-16 01:30:37.184155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.759 [2024-12-16 01:30:37.185128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.759 [2024-12-16 01:30:37.185259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.759 [2024-12-16 01:30:37.185386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.759 [2024-12-16 01:30:37.185391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.759 [2024-12-16 01:30:37.220206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.759 [2024-12-16 01:30:37.316844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
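For reference, the nvmf_veth_init sequence traced above builds the test network by hand. A minimal shell sketch of the same topology, using only device names and addresses that appear in the trace (the consolidation into one snippet is an approximation of what nvmf/common.sh does, not a copy of it):

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # two veth pairs for the initiator side, two for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1/.2, the target namespace gets 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and bridge the peer ends together
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # let NVMe/TCP traffic in on port 4420 and check reachability into the namespace
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3

The four ping checks in the trace exercise both directions across the bridge before the target application is started inside the namespace.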
00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.759 Malloc0 00:09:06.759 [2024-12-16 01:30:37.388243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.759 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77165 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77165 /var/tmp/bdevperf.sock 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 77165 ']' 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
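The `cat` at host_management.sh@23 pipes a prepared RPC batch into the target; the batch itself is not echoed in the trace, so the following is only an approximation built from standard SPDK RPCs and the values that do appear (64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode0 listening on 10.0.0.3:4420, allowed host nqn.2016-06.io.spdk:host0; the serial number here is made up):

  # against the target's default RPC socket, /var/tmp/spdk.sock
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKTEST0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The subsystem is deliberately created without allow-any-host, since the test is about adding and removing that one host entry.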
00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:07.018 { 00:09:07.018 "params": { 00:09:07.018 "name": "Nvme$subsystem", 00:09:07.018 "trtype": "$TEST_TRANSPORT", 00:09:07.018 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:07.018 "adrfam": "ipv4", 00:09:07.018 "trsvcid": "$NVMF_PORT", 00:09:07.018 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:07.018 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:07.018 "hdgst": ${hdgst:-false}, 00:09:07.018 "ddgst": ${ddgst:-false} 00:09:07.018 }, 00:09:07.018 "method": "bdev_nvme_attach_controller" 00:09:07.018 } 00:09:07.018 EOF 00:09:07.018 )") 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:07.018 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:07.018 "params": { 00:09:07.018 "name": "Nvme0", 00:09:07.018 "trtype": "tcp", 00:09:07.018 "traddr": "10.0.0.3", 00:09:07.018 "adrfam": "ipv4", 00:09:07.018 "trsvcid": "4420", 00:09:07.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:07.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:07.018 "hdgst": false, 00:09:07.018 "ddgst": false 00:09:07.018 }, 00:09:07.018 "method": "bdev_nvme_attach_controller" 00:09:07.018 }' 00:09:07.018 [2024-12-16 01:30:37.495660] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:07.018 [2024-12-16 01:30:37.495741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77165 ] 00:09:07.018 [2024-12-16 01:30:37.649289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.277 [2024-12-16 01:30:37.679790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.277 [2024-12-16 01:30:37.731613] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.277 Running I/O for 10 seconds... 
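Once bdevperf prints "Running I/O for 10 seconds...", the waitforio loop that follows polls its RPC socket until Nvme0n1 has completed at least 100 reads (the "67 -ge 100" and later "579 -ge 100" checks in the trace). A roughly equivalent standalone poll, assuming scripts/rpc.py stands in for the rpc_cmd wrapper used by the test:

  # poll bdevperf (RPC socket /var/tmp/bdevperf.sock) until enough reads have completed
  for i in $(seq 1 10); do
      reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 0.25
  done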
00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:07.277 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.278 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:07.537 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.537 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:07.537 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:07.537 01:30:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.797 01:30:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.797 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.797 [2024-12-16 01:30:38.281170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 
[2024-12-16 01:30:38.281341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.281745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.281818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.282011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.282032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.797 [2024-12-16 01:30:38.282043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.797 [2024-12-16 01:30:38.282052] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.282532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.282836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.798 [2024-12-16 01:30:38.283418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.798 [2024-12-16 01:30:38.283428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:07.799 [2024-12-16 01:30:38.283727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2099590 is same with the state(6) to be set 00:09:07.799 [2024-12-16 01:30:38.283895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:07.799 [2024-12-16 01:30:38.283913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:07.799 [2024-12-16 01:30:38.283947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:07.799 [2024-12-16 01:30:38.283966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:07.799 [2024-12-16 01:30:38.283984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.283993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202fdd0 is same with the state(6) to be set 00:09:07.799 [2024-12-16 01:30:38.285165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:07.799 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.799 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:07.799 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.799 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:07.799 task offset: 89088 on job bdev=Nvme0n1 fails 00:09:07.799 00:09:07.799 Latency(us) 00:09:07.799 [2024-12-16T01:30:38.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.799 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:07.799 Job: Nvme0n1 ended in about 0.44 seconds with error 00:09:07.799 Verification LBA range: start 0x0 length 0x400 00:09:07.799 Nvme0n1 : 0.44 1450.16 90.64 145.02 0.00 38540.29 3291.69 43134.60 00:09:07.799 [2024-12-16T01:30:38.457Z] =================================================================================================================== 00:09:07.799 [2024-12-16T01:30:38.457Z] Total : 1450.16 90.64 145.02 0.00 38540.29 3291.69 43134.60 00:09:07.799 [2024-12-16 01:30:38.287222] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:07.799 [2024-12-16 01:30:38.287246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202fdd0 (9): Bad file descriptor 00:09:07.799 [2024-12-16 01:30:38.288672] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:09:07.799 [2024-12-16 01:30:38.288769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:09:07.799 [2024-12-16 01:30:38.288809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:07.799 [2024-12-16 01:30:38.288824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:09:07.799 [2024-12-16 01:30:38.288834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:09:07.799 [2024-12-16 01:30:38.288858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:09:07.799 [2024-12-16 01:30:38.288882] 
nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x202fdd0 00:09:07.799 [2024-12-16 01:30:38.288916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202fdd0 (9): Bad file descriptor 00:09:07.799 [2024-12-16 01:30:38.288934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:09:07.799 [2024-12-16 01:30:38.288943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:09:07.799 [2024-12-16 01:30:38.288953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:09:07.799 [2024-12-16 01:30:38.288963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:09:07.799 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.799 01:30:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77165 00:09:08.736 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77165) - No such process 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:08.736 { 00:09:08.736 "params": { 00:09:08.736 "name": "Nvme$subsystem", 00:09:08.736 "trtype": "$TEST_TRANSPORT", 00:09:08.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:08.736 "adrfam": "ipv4", 00:09:08.736 "trsvcid": "$NVMF_PORT", 00:09:08.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:08.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:08.736 "hdgst": ${hdgst:-false}, 00:09:08.736 "ddgst": ${ddgst:-false} 00:09:08.736 }, 00:09:08.736 "method": "bdev_nvme_attach_controller" 00:09:08.736 } 00:09:08.736 EOF 00:09:08.736 )") 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
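The failure and recovery traced above is the point of the host_management test: while bdevperf is driving I/O to nqn.2016-06.io.spdk:cnode0, the allowed host is removed, so outstanding commands are aborted (the SQ DELETION completions) and the reconnect attempt is rejected with "does not allow host"; the host is then re-added, and the second, 1-second bdevperf run launched above confirms access is restored. The two RPCs driving that, roughly as issued by rpc_cmd at host_management.sh@84 and @85:

  # revoke the initiator's access while it still has I/O outstanding...
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # ...then restore it so the follow-up bdevperf run can reconnect cleanly
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0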
00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:08.736 01:30:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:08.736 "params": { 00:09:08.736 "name": "Nvme0", 00:09:08.736 "trtype": "tcp", 00:09:08.736 "traddr": "10.0.0.3", 00:09:08.736 "adrfam": "ipv4", 00:09:08.736 "trsvcid": "4420", 00:09:08.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:08.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:08.736 "hdgst": false, 00:09:08.736 "ddgst": false 00:09:08.736 }, 00:09:08.736 "method": "bdev_nvme_attach_controller" 00:09:08.736 }' 00:09:08.736 [2024-12-16 01:30:39.351938] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:08.736 [2024-12-16 01:30:39.352030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77205 ] 00:09:08.995 [2024-12-16 01:30:39.498688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.995 [2024-12-16 01:30:39.518351] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.995 [2024-12-16 01:30:39.554502] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.253 Running I/O for 1 seconds... 00:09:10.189 1600.00 IOPS, 100.00 MiB/s 00:09:10.189 Latency(us) 00:09:10.189 [2024-12-16T01:30:40.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.189 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:10.189 Verification LBA range: start 0x0 length 0x400 00:09:10.189 Nvme0n1 : 1.01 1652.01 103.25 0.00 0.00 37961.89 3574.69 38368.35 00:09:10.189 [2024-12-16T01:30:40.847Z] =================================================================================================================== 00:09:10.189 [2024-12-16T01:30:40.847Z] Total : 1652.01 103.25 0.00 0.00 37961.89 3574.69 38368.35 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.189 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.448 rmmod nvme_tcp 00:09:10.448 rmmod nvme_fabrics 
00:09:10.448 rmmod nvme_keyring 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 77124 ']' 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 77124 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 77124 ']' 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 77124 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77124 00:09:10.448 killing process with pid 77124 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77124' 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 77124 00:09:10.448 01:30:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 77124 00:09:10.448 [2024-12-16 01:30:41.031415] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:10.448 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 
-- # ip link set nvmf_tgt_br2 nomaster 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:10.706 00:09:10.706 real 0m5.113s 00:09:10.706 user 0m17.760s 00:09:10.706 sys 0m1.367s 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.706 ************************************ 00:09:10.706 END TEST nvmf_host_management 00:09:10.706 ************************************ 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.706 01:30:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.965 ************************************ 00:09:10.965 START TEST nvmf_lvol 00:09:10.965 ************************************ 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:10.965 * Looking for test storage... 
00:09:10.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.965 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.966 --rc genhtml_branch_coverage=1 00:09:10.966 --rc genhtml_function_coverage=1 00:09:10.966 --rc genhtml_legend=1 00:09:10.966 --rc geninfo_all_blocks=1 00:09:10.966 --rc geninfo_unexecuted_blocks=1 00:09:10.966 00:09:10.966 ' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.966 --rc genhtml_branch_coverage=1 00:09:10.966 --rc genhtml_function_coverage=1 00:09:10.966 --rc genhtml_legend=1 00:09:10.966 --rc geninfo_all_blocks=1 00:09:10.966 --rc geninfo_unexecuted_blocks=1 00:09:10.966 00:09:10.966 ' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.966 --rc genhtml_branch_coverage=1 00:09:10.966 --rc genhtml_function_coverage=1 00:09:10.966 --rc genhtml_legend=1 00:09:10.966 --rc geninfo_all_blocks=1 00:09:10.966 --rc geninfo_unexecuted_blocks=1 00:09:10.966 00:09:10.966 ' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:10.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.966 --rc genhtml_branch_coverage=1 00:09:10.966 --rc genhtml_function_coverage=1 00:09:10.966 --rc genhtml_legend=1 00:09:10.966 --rc geninfo_all_blocks=1 00:09:10.966 --rc geninfo_unexecuted_blocks=1 00:09:10.966 00:09:10.966 ' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.966 01:30:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:10.966 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:10.966 
01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:10.967 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:10.967 Cannot find device "nvmf_init_br" 00:09:11.225 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:09:11.225 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:11.226 Cannot find device "nvmf_init_br2" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:11.226 Cannot find device "nvmf_tgt_br" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.226 Cannot find device "nvmf_tgt_br2" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:11.226 Cannot find device "nvmf_init_br" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:11.226 Cannot find device "nvmf_init_br2" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:11.226 Cannot find device "nvmf_tgt_br" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:11.226 Cannot find device "nvmf_tgt_br2" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:11.226 Cannot find device "nvmf_br" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:11.226 Cannot find device "nvmf_init_if" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:11.226 Cannot find device "nvmf_init_if2" 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:11.226 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:11.485 01:30:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:11.485 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:11.485 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:09:11.485 00:09:11.485 --- 10.0.0.3 ping statistics --- 00:09:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.485 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:11.485 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:11.485 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:09:11.485 00:09:11.485 --- 10.0.0.4 ping statistics --- 00:09:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.485 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:11.485 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:11.485 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:09:11.485 00:09:11.485 --- 10.0.0.1 ping statistics --- 00:09:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.485 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:11.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:11.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:11.485 00:09:11.485 --- 10.0.0.2 ping statistics --- 00:09:11.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.485 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=77471 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 77471 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 77471 ']' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.485 01:30:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.485 [2024-12-16 01:30:42.131260] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:11.485 [2024-12-16 01:30:42.131379] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.744 [2024-12-16 01:30:42.289504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:11.744 [2024-12-16 01:30:42.313633] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.744 [2024-12-16 01:30:42.313690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.744 [2024-12-16 01:30:42.313704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.744 [2024-12-16 01:30:42.313714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.744 [2024-12-16 01:30:42.313724] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.744 [2024-12-16 01:30:42.314832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.744 [2024-12-16 01:30:42.315123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.744 [2024-12-16 01:30:42.315133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.744 [2024-12-16 01:30:42.349019] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.679 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.938 [2024-12-16 01:30:43.357735] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.938 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.196 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:13.196 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:13.454 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:13.454 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:13.713 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:13.972 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a435b6aa-0a11-4c60-a248-11771e3df6cd 00:09:13.972 01:30:44 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a435b6aa-0a11-4c60-a248-11771e3df6cd lvol 20 00:09:14.242 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=236e11c1-adcf-4102-bc56-19b8e84e82ec 00:09:14.242 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:14.513 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 236e11c1-adcf-4102-bc56-19b8e84e82ec 00:09:14.771 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:15.029 [2024-12-16 01:30:45.461490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:15.029 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:15.287 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77541 00:09:15.287 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:15.287 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:16.223 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 236e11c1-adcf-4102-bc56-19b8e84e82ec MY_SNAPSHOT 00:09:16.481 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=8a582432-8b90-407f-9c16-dfd386a05ca8 00:09:16.481 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 236e11c1-adcf-4102-bc56-19b8e84e82ec 30 00:09:16.740 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 8a582432-8b90-407f-9c16-dfd386a05ca8 MY_CLONE 00:09:16.998 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9b8f1c75-d393-484a-8945-06f381ad625c 00:09:16.998 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9b8f1c75-d393-484a-8945-06f381ad625c 00:09:17.566 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77541 00:09:25.686 Initializing NVMe Controllers 00:09:25.686 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:25.686 Controller IO queue size 128, less than required. 00:09:25.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:25.686 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:25.686 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:25.686 Initialization complete. Launching workers. 
00:09:25.686 ======================================================== 00:09:25.686 Latency(us) 00:09:25.686 Device Information : IOPS MiB/s Average min max 00:09:25.686 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10668.17 41.67 12009.52 2047.88 104478.30 00:09:25.686 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10595.67 41.39 12090.31 2019.72 56076.61 00:09:25.686 ======================================================== 00:09:25.686 Total : 21263.84 83.06 12049.78 2019.72 104478.30 00:09:25.686 00:09:25.686 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:25.686 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 236e11c1-adcf-4102-bc56-19b8e84e82ec 00:09:25.944 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a435b6aa-0a11-4c60-a248-11771e3df6cd 00:09:26.202 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:26.202 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:26.202 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:26.461 rmmod nvme_tcp 00:09:26.461 rmmod nvme_fabrics 00:09:26.461 rmmod nvme_keyring 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 77471 ']' 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 77471 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 77471 ']' 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 77471 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77471 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.461 killing process with pid 77471 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 77471' 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 77471 00:09:26.461 01:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 77471 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.720 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:26.980 00:09:26.980 real 0m16.019s 00:09:26.980 user 1m5.613s 00:09:26.980 sys 0m4.080s 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:26.980 ************************************ 00:09:26.980 END TEST nvmf_lvol 00:09:26.980 ************************************ 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:26.980 ************************************ 00:09:26.980 START TEST nvmf_lvs_grow 00:09:26.980 ************************************ 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:26.980 * Looking for test storage... 00:09:26.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.980 --rc genhtml_branch_coverage=1 00:09:26.980 --rc genhtml_function_coverage=1 00:09:26.980 --rc genhtml_legend=1 00:09:26.980 --rc geninfo_all_blocks=1 00:09:26.980 --rc geninfo_unexecuted_blocks=1 00:09:26.980 00:09:26.980 ' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.980 --rc genhtml_branch_coverage=1 00:09:26.980 --rc genhtml_function_coverage=1 00:09:26.980 --rc genhtml_legend=1 00:09:26.980 --rc geninfo_all_blocks=1 00:09:26.980 --rc geninfo_unexecuted_blocks=1 00:09:26.980 00:09:26.980 ' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.980 --rc genhtml_branch_coverage=1 00:09:26.980 --rc genhtml_function_coverage=1 00:09:26.980 --rc genhtml_legend=1 00:09:26.980 --rc geninfo_all_blocks=1 00:09:26.980 --rc geninfo_unexecuted_blocks=1 00:09:26.980 00:09:26.980 ' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:26.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.980 --rc genhtml_branch_coverage=1 00:09:26.980 --rc genhtml_function_coverage=1 00:09:26.980 --rc genhtml_legend=1 00:09:26.980 --rc geninfo_all_blocks=1 00:09:26.980 --rc geninfo_unexecuted_blocks=1 00:09:26.980 00:09:26.980 ' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:26.980 01:30:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.980 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.981 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.981 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
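The two variables set at the end of the trace above, rpc_py and bdevperf_rpc_sock, define how the rest of the test talks to its two SPDK processes: the nvmf target is driven over rpc.py's default /var/tmp/spdk.sock, while the bdevperf initiator started later gets its own RPC socket. A minimal sketch of that convention, reconstructed from the trace (the bdev_get_bdevs call is only an illustrative query):

```sh
# Assumed convention: one rpc.py wrapper, two sockets.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# Without -s, rpc.py talks to the nvmf_tgt application on /var/tmp/spdk.sock.
$rpc_py bdev_get_bdevs

# With -s, the same wrapper addresses the separately started bdevperf process instead.
$rpc_py -s "$bdevperf_rpc_sock" bdev_get_bdevs
```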
00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
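The interface and namespace names just assigned are torn down before being recreated, and every cleanup command in the next stretch of the trace is best-effort, so the "Cannot find device" and "Cannot open network namespace" messages that follow are expected on a fresh runner rather than failures. A minimal sketch of the assumed pattern:

```sh
# Assumed shape of the cleanup shown below: each command may fail, because the
# devices only exist if a previous run left them behind.
ip link set nvmf_init_br nomaster || true
ip link set nvmf_tgt_br nomaster  || true
ip link set nvmf_init_br down     || true
ip link set nvmf_tgt_br down      || true
ip link delete nvmf_br type bridge || true
ip link delete nvmf_init_if        || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
```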
00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:27.240 Cannot find device "nvmf_init_br" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:27.240 Cannot find device "nvmf_init_br2" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:27.240 Cannot find device "nvmf_tgt_br" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:27.240 Cannot find device "nvmf_tgt_br2" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:27.240 Cannot find device "nvmf_init_br" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:27.240 Cannot find device "nvmf_init_br2" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:27.240 Cannot find device "nvmf_tgt_br" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:27.240 Cannot find device "nvmf_tgt_br2" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:27.240 Cannot find device "nvmf_br" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:27.240 Cannot find device "nvmf_init_if" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:27.240 Cannot find device "nvmf_init_if2" 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:27.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:27.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:27.240 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
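At this point nvmf_veth_init has rebuilt the test network: two veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4) while the initiator ends stay in the root namespace (10.0.0.1 and 10.0.0.2), with the host-side peer interfaces bridged together over nvmf_br. A condensed, standalone sketch of the same topology, showing only the first interface pair and no error handling, with names and addresses taken from the trace above:

```sh
set -e
ip netns add nvmf_tgt_ns_spdk                               # target-side namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the netns

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # bridge the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
```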
00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:27.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:27.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:09:27.501 00:09:27.501 --- 10.0.0.3 ping statistics --- 00:09:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.501 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:09:27.501 01:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:27.501 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:27.501 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:09:27.501 00:09:27.501 --- 10.0.0.4 ping statistics --- 00:09:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.501 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:27.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:27.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:27.501 00:09:27.501 --- 10.0.0.1 ping statistics --- 00:09:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.501 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:27.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:27.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:09:27.501 00:09:27.501 --- 10.0.0.2 ping statistics --- 00:09:27.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.501 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=77921 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 77921 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 77921 ']' 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.501 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.501 [2024-12-16 01:30:58.105514] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
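After the iptables ACCEPT rules for port 4420 and the pings above confirm connectivity in both directions, nvmfappstart launches the target inside the namespace (core mask 0x1, pid 77921 in this run) and waits for its RPC socket to answer. A minimal sketch of the assumed start-and-wait logic; the polling loop is only a stand-in for the harness's waitforlisten helper:

```sh
# Sketch only: launch nvmf_tgt inside the target namespace and poll its RPC socket.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# waitforlisten stand-in: keep querying until the application responds (timeout is illustrative).
for _ in $(seq 1 100); do
    if "$rpc_py" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
```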
00:09:27.501 [2024-12-16 01:30:58.105637] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.760 [2024-12-16 01:30:58.255787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.760 [2024-12-16 01:30:58.277184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.760 [2024-12-16 01:30:58.277261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.760 [2024-12-16 01:30:58.277271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.760 [2024-12-16 01:30:58.277277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.760 [2024-12-16 01:30:58.277283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.760 [2024-12-16 01:30:58.277593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.760 [2024-12-16 01:30:58.304840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:27.760 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:28.019 [2024-12-16 01:30:58.608890] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.019 ************************************ 00:09:28.019 START TEST lvs_grow_clean 00:09:28.019 ************************************ 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:28.019 01:30:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:28.019 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:28.278 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:28.278 01:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:28.845 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:28.845 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:28.845 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.103 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.103 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.103 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c25eeaa9-cfba-49c5-82ec-22000cb2538e lvol 150 00:09:29.362 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=574dbbd1-293d-403a-a759-a2896dffc2bd 00:09:29.362 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.362 01:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:29.620 [2024-12-16 01:31:00.150363] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:29.620 [2024-12-16 01:31:00.150451] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:29.620 true 00:09:29.620 01:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:29.620 01:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:29.877 01:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:29.877 01:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.136 01:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 574dbbd1-293d-403a-a759-a2896dffc2bd 00:09:30.395 01:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:30.654 [2024-12-16 01:31:01.182894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.654 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78002 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78002 /var/tmp/bdevperf.sock 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 78002 ']' 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:30.912 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:30.913 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:30.913 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.913 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:30.913 [2024-12-16 01:31:01.487983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
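The sequence above is the whole export path for the clean-run lvol: a TCP transport, one subsystem, the lvol added as its namespace, and listeners on the target-namespace address. Replayed as plain RPC calls against the target's default socket, with the values from this run:

```sh
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192    # done once, earlier in the trace
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 574dbbd1-293d-403a-a759-a2896dffc2bd
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
$rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
```

The bdevperf process started right after (pid 78002) then attaches to this subsystem from the initiator side over its own RPC socket, which is the next step in the trace.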
00:09:30.913 [2024-12-16 01:31:01.488074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78002 ] 00:09:31.171 [2024-12-16 01:31:01.633109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.171 [2024-12-16 01:31:01.658038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.171 [2024-12-16 01:31:01.692573] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.171 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.171 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:31.171 01:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:31.430 Nvme0n1 00:09:31.688 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:31.688 [ 00:09:31.688 { 00:09:31.688 "name": "Nvme0n1", 00:09:31.688 "aliases": [ 00:09:31.688 "574dbbd1-293d-403a-a759-a2896dffc2bd" 00:09:31.688 ], 00:09:31.688 "product_name": "NVMe disk", 00:09:31.688 "block_size": 4096, 00:09:31.688 "num_blocks": 38912, 00:09:31.688 "uuid": "574dbbd1-293d-403a-a759-a2896dffc2bd", 00:09:31.688 "numa_id": -1, 00:09:31.688 "assigned_rate_limits": { 00:09:31.688 "rw_ios_per_sec": 0, 00:09:31.688 "rw_mbytes_per_sec": 0, 00:09:31.688 "r_mbytes_per_sec": 0, 00:09:31.688 "w_mbytes_per_sec": 0 00:09:31.688 }, 00:09:31.688 "claimed": false, 00:09:31.688 "zoned": false, 00:09:31.688 "supported_io_types": { 00:09:31.688 "read": true, 00:09:31.688 "write": true, 00:09:31.688 "unmap": true, 00:09:31.688 "flush": true, 00:09:31.688 "reset": true, 00:09:31.688 "nvme_admin": true, 00:09:31.688 "nvme_io": true, 00:09:31.688 "nvme_io_md": false, 00:09:31.688 "write_zeroes": true, 00:09:31.688 "zcopy": false, 00:09:31.688 "get_zone_info": false, 00:09:31.688 "zone_management": false, 00:09:31.688 "zone_append": false, 00:09:31.688 "compare": true, 00:09:31.688 "compare_and_write": true, 00:09:31.688 "abort": true, 00:09:31.688 "seek_hole": false, 00:09:31.688 "seek_data": false, 00:09:31.688 "copy": true, 00:09:31.688 "nvme_iov_md": false 00:09:31.688 }, 00:09:31.688 "memory_domains": [ 00:09:31.688 { 00:09:31.688 "dma_device_id": "system", 00:09:31.688 "dma_device_type": 1 00:09:31.688 } 00:09:31.688 ], 00:09:31.688 "driver_specific": { 00:09:31.688 "nvme": [ 00:09:31.688 { 00:09:31.688 "trid": { 00:09:31.688 "trtype": "TCP", 00:09:31.688 "adrfam": "IPv4", 00:09:31.688 "traddr": "10.0.0.3", 00:09:31.688 "trsvcid": "4420", 00:09:31.688 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:31.688 }, 00:09:31.688 "ctrlr_data": { 00:09:31.688 "cntlid": 1, 00:09:31.688 "vendor_id": "0x8086", 00:09:31.688 "model_number": "SPDK bdev Controller", 00:09:31.688 "serial_number": "SPDK0", 00:09:31.688 "firmware_revision": "25.01", 00:09:31.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:31.688 "oacs": { 00:09:31.688 "security": 0, 00:09:31.688 "format": 0, 00:09:31.688 "firmware": 0, 
00:09:31.688 "ns_manage": 0 00:09:31.688 }, 00:09:31.688 "multi_ctrlr": true, 00:09:31.688 "ana_reporting": false 00:09:31.688 }, 00:09:31.688 "vs": { 00:09:31.688 "nvme_version": "1.3" 00:09:31.688 }, 00:09:31.688 "ns_data": { 00:09:31.688 "id": 1, 00:09:31.688 "can_share": true 00:09:31.688 } 00:09:31.688 } 00:09:31.688 ], 00:09:31.688 "mp_policy": "active_passive" 00:09:31.688 } 00:09:31.688 } 00:09:31.689 ] 00:09:31.947 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:31.947 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78018 00:09:31.947 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:31.947 Running I/O for 10 seconds... 00:09:32.884 Latency(us) 00:09:32.884 [2024-12-16T01:31:03.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:32.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.884 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:32.884 [2024-12-16T01:31:03.542Z] =================================================================================================================== 00:09:32.884 [2024-12-16T01:31:03.542Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:32.884 00:09:33.819 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:33.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.819 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:33.819 [2024-12-16T01:31:04.477Z] =================================================================================================================== 00:09:33.819 [2024-12-16T01:31:04.477Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:33.819 00:09:34.078 true 00:09:34.078 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:34.078 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:34.644 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:34.644 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:34.644 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 78018 00:09:34.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.901 Nvme0n1 : 3.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:34.901 [2024-12-16T01:31:05.559Z] =================================================================================================================== 00:09:34.901 [2024-12-16T01:31:05.559Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:34.901 00:09:35.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.838 Nvme0n1 : 4.00 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:35.838 [2024-12-16T01:31:06.496Z] 
=================================================================================================================== 00:09:35.838 [2024-12-16T01:31:06.496Z] Total : 6572.25 25.67 0.00 0.00 0.00 0.00 0.00 00:09:35.838 00:09:37.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.255 Nvme0n1 : 5.00 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:09:37.255 [2024-12-16T01:31:07.913Z] =================================================================================================================== 00:09:37.255 [2024-12-16T01:31:07.913Z] Total : 6553.20 25.60 0.00 0.00 0.00 0.00 0.00 00:09:37.255 00:09:37.822 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.822 Nvme0n1 : 6.00 6498.17 25.38 0.00 0.00 0.00 0.00 0.00 00:09:37.822 [2024-12-16T01:31:08.480Z] =================================================================================================================== 00:09:37.822 [2024-12-16T01:31:08.480Z] Total : 6498.17 25.38 0.00 0.00 0.00 0.00 0.00 00:09:37.822 00:09:39.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.198 Nvme0n1 : 7.00 6531.43 25.51 0.00 0.00 0.00 0.00 0.00 00:09:39.198 [2024-12-16T01:31:09.856Z] =================================================================================================================== 00:09:39.198 [2024-12-16T01:31:09.856Z] Total : 6531.43 25.51 0.00 0.00 0.00 0.00 0.00 00:09:39.198 00:09:40.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.135 Nvme0n1 : 8.00 6417.38 25.07 0.00 0.00 0.00 0.00 0.00 00:09:40.135 [2024-12-16T01:31:10.794Z] =================================================================================================================== 00:09:40.136 [2024-12-16T01:31:10.794Z] Total : 6417.38 25.07 0.00 0.00 0.00 0.00 0.00 00:09:40.136 00:09:41.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.073 Nvme0n1 : 9.00 6381.67 24.93 0.00 0.00 0.00 0.00 0.00 00:09:41.073 [2024-12-16T01:31:11.731Z] =================================================================================================================== 00:09:41.073 [2024-12-16T01:31:11.731Z] Total : 6381.67 24.93 0.00 0.00 0.00 0.00 0.00 00:09:41.073 00:09:42.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.010 Nvme0n1 : 10.00 6365.80 24.87 0.00 0.00 0.00 0.00 0.00 00:09:42.010 [2024-12-16T01:31:12.668Z] =================================================================================================================== 00:09:42.010 [2024-12-16T01:31:12.668Z] Total : 6365.80 24.87 0.00 0.00 0.00 0.00 0.00 00:09:42.010 00:09:42.010 00:09:42.010 Latency(us) 00:09:42.010 [2024-12-16T01:31:12.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.010 Nvme0n1 : 10.01 6371.15 24.89 0.00 0.00 20084.75 11379.43 130595.37 00:09:42.010 [2024-12-16T01:31:12.668Z] =================================================================================================================== 00:09:42.010 [2024-12-16T01:31:12.668Z] Total : 6371.15 24.89 0.00 0.00 20084.75 11379.43 130595.37 00:09:42.010 { 00:09:42.010 "results": [ 00:09:42.010 { 00:09:42.010 "job": "Nvme0n1", 00:09:42.010 "core_mask": "0x2", 00:09:42.010 "workload": "randwrite", 00:09:42.010 "status": "finished", 00:09:42.010 "queue_depth": 128, 00:09:42.010 "io_size": 4096, 00:09:42.010 "runtime": 
10.011698, 00:09:42.010 "iops": 6371.14703220173, 00:09:42.010 "mibps": 24.88729309453801, 00:09:42.010 "io_failed": 0, 00:09:42.010 "io_timeout": 0, 00:09:42.010 "avg_latency_us": 20084.746969041367, 00:09:42.010 "min_latency_us": 11379.432727272728, 00:09:42.010 "max_latency_us": 130595.37454545454 00:09:42.010 } 00:09:42.010 ], 00:09:42.010 "core_count": 1 00:09:42.010 } 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78002 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 78002 ']' 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 78002 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78002 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.010 killing process with pid 78002 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78002' 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 78002 00:09:42.010 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.010 00:09:42.010 Latency(us) 00:09:42.010 [2024-12-16T01:31:12.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.010 [2024-12-16T01:31:12.668Z] =================================================================================================================== 00:09:42.010 [2024-12-16T01:31:12.668Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 78002 00:09:42.010 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:42.269 01:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:42.835 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:42.835 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:42.835 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:42.835 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:42.835 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.094 [2024-12-16 01:31:13.709127] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:43.094 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:43.094 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:43.094 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:43.094 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.094 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.094 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.352 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.352 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.352 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.352 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.352 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:43.352 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:43.352 request: 00:09:43.352 { 00:09:43.352 "uuid": "c25eeaa9-cfba-49c5-82ec-22000cb2538e", 00:09:43.352 "method": "bdev_lvol_get_lvstores", 00:09:43.352 "req_id": 1 00:09:43.352 } 00:09:43.352 Got JSON-RPC error response 00:09:43.352 response: 00:09:43.352 { 00:09:43.352 "code": -19, 00:09:43.352 "message": "No such device" 00:09:43.352 } 00:09:43.610 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:43.610 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.611 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.611 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.611 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:43.869 aio_bdev 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
574dbbd1-293d-403a-a759-a2896dffc2bd 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=574dbbd1-293d-403a-a759-a2896dffc2bd 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:43.869 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 574dbbd1-293d-403a-a759-a2896dffc2bd -t 2000 00:09:44.128 [ 00:09:44.128 { 00:09:44.128 "name": "574dbbd1-293d-403a-a759-a2896dffc2bd", 00:09:44.128 "aliases": [ 00:09:44.128 "lvs/lvol" 00:09:44.128 ], 00:09:44.128 "product_name": "Logical Volume", 00:09:44.128 "block_size": 4096, 00:09:44.128 "num_blocks": 38912, 00:09:44.128 "uuid": "574dbbd1-293d-403a-a759-a2896dffc2bd", 00:09:44.128 "assigned_rate_limits": { 00:09:44.128 "rw_ios_per_sec": 0, 00:09:44.128 "rw_mbytes_per_sec": 0, 00:09:44.128 "r_mbytes_per_sec": 0, 00:09:44.128 "w_mbytes_per_sec": 0 00:09:44.128 }, 00:09:44.128 "claimed": false, 00:09:44.128 "zoned": false, 00:09:44.128 "supported_io_types": { 00:09:44.128 "read": true, 00:09:44.128 "write": true, 00:09:44.128 "unmap": true, 00:09:44.128 "flush": false, 00:09:44.128 "reset": true, 00:09:44.128 "nvme_admin": false, 00:09:44.128 "nvme_io": false, 00:09:44.128 "nvme_io_md": false, 00:09:44.128 "write_zeroes": true, 00:09:44.128 "zcopy": false, 00:09:44.128 "get_zone_info": false, 00:09:44.128 "zone_management": false, 00:09:44.128 "zone_append": false, 00:09:44.128 "compare": false, 00:09:44.128 "compare_and_write": false, 00:09:44.128 "abort": false, 00:09:44.128 "seek_hole": true, 00:09:44.128 "seek_data": true, 00:09:44.128 "copy": false, 00:09:44.128 "nvme_iov_md": false 00:09:44.128 }, 00:09:44.128 "driver_specific": { 00:09:44.128 "lvol": { 00:09:44.128 "lvol_store_uuid": "c25eeaa9-cfba-49c5-82ec-22000cb2538e", 00:09:44.128 "base_bdev": "aio_bdev", 00:09:44.128 "thin_provision": false, 00:09:44.128 "num_allocated_clusters": 38, 00:09:44.128 "snapshot": false, 00:09:44.128 "clone": false, 00:09:44.128 "esnap_clone": false 00:09:44.128 } 00:09:44.128 } 00:09:44.128 } 00:09:44.128 ] 00:09:44.128 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:44.128 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:44.128 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:44.387 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:44.387 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:44.387 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:44.646 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:44.646 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 574dbbd1-293d-403a-a759-a2896dffc2bd 00:09:44.904 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c25eeaa9-cfba-49c5-82ec-22000cb2538e 00:09:45.162 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.420 01:31:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.679 00:09:45.679 real 0m17.682s 00:09:45.679 user 0m16.725s 00:09:45.679 sys 0m2.335s 00:09:45.679 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.679 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:45.679 ************************************ 00:09:45.679 END TEST lvs_grow_clean 00:09:45.679 ************************************ 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.938 ************************************ 00:09:45.938 START TEST lvs_grow_dirty 00:09:45.938 ************************************ 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:45.938 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.197 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:46.197 01:31:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:46.455 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=13130557-1b20-4b03-ab59-186b2fe1c713 00:09:46.455 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:09:46.455 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:46.714 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:46.714 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:46.714 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 13130557-1b20-4b03-ab59-186b2fe1c713 lvol 150 00:09:46.972 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:09:46.972 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:46.972 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:47.234 [2024-12-16 01:31:17.830460] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:47.234 [2024-12-16 01:31:17.830591] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:47.234 true 00:09:47.234 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:09:47.234 01:31:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:47.503 01:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:47.503 01:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.761 01:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:09:48.328 01:31:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:48.328 [2024-12-16 01:31:18.971057] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.587 01:31:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78265 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78265 /var/tmp/bdevperf.sock 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 78265 ']' 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.587 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:48.846 [2024-12-16 01:31:19.281520] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
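The export-and-measure step the log is walking through here reduces to a short RPC sequence (paths relative to the SPDK repo; NQN, lvol UUID, address and bdevperf flags exactly as used in this run; a sketch of the flow, not a standalone script):

  # target side: expose the lvol bdev over NVMe/TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1e27216f-00a7-4242-ab4f-a34c997aa9bb
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  # initiator side: bdevperf waits (-z) on its own RPC socket, gets a controller attached, then runs the workload
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach and perform_tests calls are the ones that appear a few entries further down, once waitforlisten has seen /var/tmp/bdevperf.sock.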
00:09:48.846 [2024-12-16 01:31:19.281622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78265 ] 00:09:48.846 [2024-12-16 01:31:19.418521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.846 [2024-12-16 01:31:19.439243] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.846 [2024-12-16 01:31:19.470928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.105 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.105 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:49.105 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:49.364 Nvme0n1 00:09:49.364 01:31:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:49.622 [ 00:09:49.622 { 00:09:49.622 "name": "Nvme0n1", 00:09:49.622 "aliases": [ 00:09:49.622 "1e27216f-00a7-4242-ab4f-a34c997aa9bb" 00:09:49.622 ], 00:09:49.622 "product_name": "NVMe disk", 00:09:49.622 "block_size": 4096, 00:09:49.622 "num_blocks": 38912, 00:09:49.622 "uuid": "1e27216f-00a7-4242-ab4f-a34c997aa9bb", 00:09:49.622 "numa_id": -1, 00:09:49.622 "assigned_rate_limits": { 00:09:49.622 "rw_ios_per_sec": 0, 00:09:49.622 "rw_mbytes_per_sec": 0, 00:09:49.622 "r_mbytes_per_sec": 0, 00:09:49.622 "w_mbytes_per_sec": 0 00:09:49.622 }, 00:09:49.622 "claimed": false, 00:09:49.622 "zoned": false, 00:09:49.622 "supported_io_types": { 00:09:49.622 "read": true, 00:09:49.622 "write": true, 00:09:49.622 "unmap": true, 00:09:49.622 "flush": true, 00:09:49.623 "reset": true, 00:09:49.623 "nvme_admin": true, 00:09:49.623 "nvme_io": true, 00:09:49.623 "nvme_io_md": false, 00:09:49.623 "write_zeroes": true, 00:09:49.623 "zcopy": false, 00:09:49.623 "get_zone_info": false, 00:09:49.623 "zone_management": false, 00:09:49.623 "zone_append": false, 00:09:49.623 "compare": true, 00:09:49.623 "compare_and_write": true, 00:09:49.623 "abort": true, 00:09:49.623 "seek_hole": false, 00:09:49.623 "seek_data": false, 00:09:49.623 "copy": true, 00:09:49.623 "nvme_iov_md": false 00:09:49.623 }, 00:09:49.623 "memory_domains": [ 00:09:49.623 { 00:09:49.623 "dma_device_id": "system", 00:09:49.623 "dma_device_type": 1 00:09:49.623 } 00:09:49.623 ], 00:09:49.623 "driver_specific": { 00:09:49.623 "nvme": [ 00:09:49.623 { 00:09:49.623 "trid": { 00:09:49.623 "trtype": "TCP", 00:09:49.623 "adrfam": "IPv4", 00:09:49.623 "traddr": "10.0.0.3", 00:09:49.623 "trsvcid": "4420", 00:09:49.623 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:49.623 }, 00:09:49.623 "ctrlr_data": { 00:09:49.623 "cntlid": 1, 00:09:49.623 "vendor_id": "0x8086", 00:09:49.623 "model_number": "SPDK bdev Controller", 00:09:49.623 "serial_number": "SPDK0", 00:09:49.623 "firmware_revision": "25.01", 00:09:49.623 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:49.623 "oacs": { 00:09:49.623 "security": 0, 00:09:49.623 "format": 0, 00:09:49.623 "firmware": 0, 
00:09:49.623 "ns_manage": 0 00:09:49.623 }, 00:09:49.623 "multi_ctrlr": true, 00:09:49.623 "ana_reporting": false 00:09:49.623 }, 00:09:49.623 "vs": { 00:09:49.623 "nvme_version": "1.3" 00:09:49.623 }, 00:09:49.623 "ns_data": { 00:09:49.623 "id": 1, 00:09:49.623 "can_share": true 00:09:49.623 } 00:09:49.623 } 00:09:49.623 ], 00:09:49.623 "mp_policy": "active_passive" 00:09:49.623 } 00:09:49.623 } 00:09:49.623 ] 00:09:49.623 01:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78281 00:09:49.623 01:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:49.623 01:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:49.623 Running I/O for 10 seconds... 00:09:51.000 Latency(us) 00:09:51.000 [2024-12-16T01:31:21.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.000 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:51.000 [2024-12-16T01:31:21.658Z] =================================================================================================================== 00:09:51.000 [2024-12-16T01:31:21.658Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:51.000 00:09:51.567 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:09:51.825 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.825 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:51.825 [2024-12-16T01:31:22.483Z] =================================================================================================================== 00:09:51.825 [2024-12-16T01:31:22.483Z] Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:51.825 00:09:51.825 true 00:09:52.084 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:09:52.084 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:52.343 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:52.343 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:52.343 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 78281 00:09:52.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.601 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:52.601 [2024-12-16T01:31:23.259Z] =================================================================================================================== 00:09:52.601 [2024-12-16T01:31:23.259Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:52.601 00:09:53.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.976 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:53.976 [2024-12-16T01:31:24.634Z] 
=================================================================================================================== 00:09:53.976 [2024-12-16T01:31:24.634Z] Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:53.976 00:09:54.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.911 Nvme0n1 : 5.00 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:09:54.911 [2024-12-16T01:31:25.569Z] =================================================================================================================== 00:09:54.911 [2024-12-16T01:31:25.569Z] Total : 6680.20 26.09 0.00 0.00 0.00 0.00 0.00 00:09:54.911 00:09:55.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.847 Nvme0n1 : 6.00 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:09:55.847 [2024-12-16T01:31:26.505Z] =================================================================================================================== 00:09:55.847 [2024-12-16T01:31:26.505Z] Total : 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:09:55.847 00:09:56.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.784 Nvme0n1 : 7.00 6640.29 25.94 0.00 0.00 0.00 0.00 0.00 00:09:56.784 [2024-12-16T01:31:27.442Z] =================================================================================================================== 00:09:56.784 [2024-12-16T01:31:27.442Z] Total : 6640.29 25.94 0.00 0.00 0.00 0.00 0.00 00:09:56.784 00:09:57.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.721 Nvme0n1 : 8.00 6499.12 25.39 0.00 0.00 0.00 0.00 0.00 00:09:57.721 [2024-12-16T01:31:28.379Z] =================================================================================================================== 00:09:57.721 [2024-12-16T01:31:28.379Z] Total : 6499.12 25.39 0.00 0.00 0.00 0.00 0.00 00:09:57.721 00:09:58.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.673 Nvme0n1 : 9.00 6412.00 25.05 0.00 0.00 0.00 0.00 0.00 00:09:58.673 [2024-12-16T01:31:29.331Z] =================================================================================================================== 00:09:58.673 [2024-12-16T01:31:29.331Z] Total : 6412.00 25.05 0.00 0.00 0.00 0.00 0.00 00:09:58.673 00:09:59.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.632 Nvme0n1 : 10.00 6393.10 24.97 0.00 0.00 0.00 0.00 0.00 00:09:59.632 [2024-12-16T01:31:30.290Z] =================================================================================================================== 00:09:59.632 [2024-12-16T01:31:30.290Z] Total : 6393.10 24.97 0.00 0.00 0.00 0.00 0.00 00:09:59.632 00:09:59.632 00:09:59.632 Latency(us) 00:09:59.632 [2024-12-16T01:31:30.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.632 Nvme0n1 : 10.01 6400.90 25.00 0.00 0.00 19992.48 6345.08 178257.92 00:09:59.632 [2024-12-16T01:31:30.290Z] =================================================================================================================== 00:09:59.632 [2024-12-16T01:31:30.290Z] Total : 6400.90 25.00 0.00 0.00 19992.48 6345.08 178257.92 00:09:59.632 { 00:09:59.632 "results": [ 00:09:59.632 { 00:09:59.632 "job": "Nvme0n1", 00:09:59.632 "core_mask": "0x2", 00:09:59.632 "workload": "randwrite", 00:09:59.632 "status": "finished", 00:09:59.632 "queue_depth": 128, 00:09:59.632 "io_size": 4096, 00:09:59.632 "runtime": 
10.007816, 00:09:59.632 "iops": 6400.897058858796, 00:09:59.632 "mibps": 25.003504136167173, 00:09:59.632 "io_failed": 0, 00:09:59.632 "io_timeout": 0, 00:09:59.632 "avg_latency_us": 19992.475967126895, 00:09:59.632 "min_latency_us": 6345.076363636364, 00:09:59.632 "max_latency_us": 178257.92 00:09:59.632 } 00:09:59.632 ], 00:09:59.632 "core_count": 1 00:09:59.632 } 00:09:59.632 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78265 00:09:59.632 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 78265 ']' 00:09:59.632 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 78265 00:09:59.632 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:59.632 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.632 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78265 00:09:59.892 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:59.892 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:59.892 killing process with pid 78265 00:09:59.892 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78265' 00:09:59.892 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 78265 00:09:59.892 Received shutdown signal, test time was about 10.000000 seconds 00:09:59.892 00:09:59.892 Latency(us) 00:09:59.892 [2024-12-16T01:31:30.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.892 [2024-12-16T01:31:30.550Z] =================================================================================================================== 00:09:59.892 [2024-12-16T01:31:30.550Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:59.892 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 78265 00:09:59.892 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:00.151 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:00.720 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:00.720 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:00.720 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:00.720 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:00.720 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 77921 00:10:00.720 
01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 77921 00:10:00.980 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 77921 Killed "${NVMF_APP[@]}" "$@" 00:10:00.980 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=78414 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 78414 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 78414 ']' 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.981 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:00.981 [2024-12-16 01:31:31.460908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:00.981 [2024-12-16 01:31:31.460993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.981 [2024-12-16 01:31:31.608089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.981 [2024-12-16 01:31:31.627806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.981 [2024-12-16 01:31:31.627878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.981 [2024-12-16 01:31:31.627905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.981 [2024-12-16 01:31:31.627912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.981 [2024-12-16 01:31:31.627919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
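The dirty leg of the test depends on the ordering visible around this point: the lvol store is grown with bdev_lvol_grow_lvstore while bdevperf is still writing, the target is then killed with SIGKILL so the blobstore never unloads cleanly, and a fresh nvmf_tgt re-creates the AIO bdev, which triggers the blobstore recovery notices that follow. Roughly, with the UUID, PIDs and flags from this run and repo paths shortened:

  scripts/rpc.py bdev_lvol_grow_lvstore -u 13130557-1b20-4b03-ab59-186b2fe1c713   # grow the lvstore onto the 400M backing file
  kill -9 "$nvmfpid"                                                              # 77921 here; a hard kill leaves the lvstore dirty
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &       # fresh target (78414 in this run)
  scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096          # re-attach the file; recovery runs on lvstore load
  scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 \
      | jq -r '.[0].total_data_clusters'                                          # expected to report 99, i.e. the grow survived the crash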
00:10:00.981 [2024-12-16 01:31:31.628242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.241 [2024-12-16 01:31:31.657939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.241 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:01.500 [2024-12-16 01:31:32.017174] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:01.500 [2024-12-16 01:31:32.019759] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:01.500 [2024-12-16 01:31:32.020002] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.500 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:01.758 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e27216f-00a7-4242-ab4f-a34c997aa9bb -t 2000 00:10:02.017 [ 00:10:02.017 { 00:10:02.017 "name": "1e27216f-00a7-4242-ab4f-a34c997aa9bb", 00:10:02.017 "aliases": [ 00:10:02.017 "lvs/lvol" 00:10:02.017 ], 00:10:02.017 "product_name": "Logical Volume", 00:10:02.017 "block_size": 4096, 00:10:02.017 "num_blocks": 38912, 00:10:02.017 "uuid": "1e27216f-00a7-4242-ab4f-a34c997aa9bb", 00:10:02.017 "assigned_rate_limits": { 00:10:02.017 "rw_ios_per_sec": 0, 00:10:02.017 "rw_mbytes_per_sec": 0, 00:10:02.017 "r_mbytes_per_sec": 0, 00:10:02.017 "w_mbytes_per_sec": 0 00:10:02.017 }, 00:10:02.017 
"claimed": false, 00:10:02.017 "zoned": false, 00:10:02.017 "supported_io_types": { 00:10:02.017 "read": true, 00:10:02.017 "write": true, 00:10:02.017 "unmap": true, 00:10:02.017 "flush": false, 00:10:02.017 "reset": true, 00:10:02.017 "nvme_admin": false, 00:10:02.017 "nvme_io": false, 00:10:02.017 "nvme_io_md": false, 00:10:02.017 "write_zeroes": true, 00:10:02.017 "zcopy": false, 00:10:02.018 "get_zone_info": false, 00:10:02.018 "zone_management": false, 00:10:02.018 "zone_append": false, 00:10:02.018 "compare": false, 00:10:02.018 "compare_and_write": false, 00:10:02.018 "abort": false, 00:10:02.018 "seek_hole": true, 00:10:02.018 "seek_data": true, 00:10:02.018 "copy": false, 00:10:02.018 "nvme_iov_md": false 00:10:02.018 }, 00:10:02.018 "driver_specific": { 00:10:02.018 "lvol": { 00:10:02.018 "lvol_store_uuid": "13130557-1b20-4b03-ab59-186b2fe1c713", 00:10:02.018 "base_bdev": "aio_bdev", 00:10:02.018 "thin_provision": false, 00:10:02.018 "num_allocated_clusters": 38, 00:10:02.018 "snapshot": false, 00:10:02.018 "clone": false, 00:10:02.018 "esnap_clone": false 00:10:02.018 } 00:10:02.018 } 00:10:02.018 } 00:10:02.018 ] 00:10:02.018 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:02.018 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:02.018 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:02.277 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:02.277 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:02.277 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:02.536 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:02.536 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:02.795 [2024-12-16 01:31:33.379864] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.795 01:31:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:02.795 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:03.054 request: 00:10:03.054 { 00:10:03.054 "uuid": "13130557-1b20-4b03-ab59-186b2fe1c713", 00:10:03.054 "method": "bdev_lvol_get_lvstores", 00:10:03.054 "req_id": 1 00:10:03.054 } 00:10:03.054 Got JSON-RPC error response 00:10:03.054 response: 00:10:03.054 { 00:10:03.054 "code": -19, 00:10:03.054 "message": "No such device" 00:10:03.054 } 00:10:03.054 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:03.054 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:03.054 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:03.054 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:03.054 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.314 aio_bdev 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.314 01:31:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:03.573 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e27216f-00a7-4242-ab4f-a34c997aa9bb -t 2000 00:10:03.832 [ 00:10:03.832 { 
00:10:03.832 "name": "1e27216f-00a7-4242-ab4f-a34c997aa9bb", 00:10:03.832 "aliases": [ 00:10:03.832 "lvs/lvol" 00:10:03.832 ], 00:10:03.832 "product_name": "Logical Volume", 00:10:03.832 "block_size": 4096, 00:10:03.832 "num_blocks": 38912, 00:10:03.832 "uuid": "1e27216f-00a7-4242-ab4f-a34c997aa9bb", 00:10:03.832 "assigned_rate_limits": { 00:10:03.832 "rw_ios_per_sec": 0, 00:10:03.832 "rw_mbytes_per_sec": 0, 00:10:03.832 "r_mbytes_per_sec": 0, 00:10:03.832 "w_mbytes_per_sec": 0 00:10:03.832 }, 00:10:03.832 "claimed": false, 00:10:03.832 "zoned": false, 00:10:03.832 "supported_io_types": { 00:10:03.832 "read": true, 00:10:03.832 "write": true, 00:10:03.832 "unmap": true, 00:10:03.832 "flush": false, 00:10:03.832 "reset": true, 00:10:03.832 "nvme_admin": false, 00:10:03.832 "nvme_io": false, 00:10:03.832 "nvme_io_md": false, 00:10:03.832 "write_zeroes": true, 00:10:03.832 "zcopy": false, 00:10:03.832 "get_zone_info": false, 00:10:03.832 "zone_management": false, 00:10:03.832 "zone_append": false, 00:10:03.832 "compare": false, 00:10:03.832 "compare_and_write": false, 00:10:03.832 "abort": false, 00:10:03.833 "seek_hole": true, 00:10:03.833 "seek_data": true, 00:10:03.833 "copy": false, 00:10:03.833 "nvme_iov_md": false 00:10:03.833 }, 00:10:03.833 "driver_specific": { 00:10:03.833 "lvol": { 00:10:03.833 "lvol_store_uuid": "13130557-1b20-4b03-ab59-186b2fe1c713", 00:10:03.833 "base_bdev": "aio_bdev", 00:10:03.833 "thin_provision": false, 00:10:03.833 "num_allocated_clusters": 38, 00:10:03.833 "snapshot": false, 00:10:03.833 "clone": false, 00:10:03.833 "esnap_clone": false 00:10:03.833 } 00:10:03.833 } 00:10:03.833 } 00:10:03.833 ] 00:10:03.833 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:03.833 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:03.833 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:04.092 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:04.092 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:04.092 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:04.352 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:04.352 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1e27216f-00a7-4242-ab4f-a34c997aa9bb 00:10:04.610 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 13130557-1b20-4b03-ab59-186b2fe1c713 00:10:04.869 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:05.127 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:05.387 00:10:05.387 real 0m19.574s 00:10:05.387 user 0m39.834s 00:10:05.387 sys 0m8.926s 00:10:05.387 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.387 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:05.387 ************************************ 00:10:05.387 END TEST lvs_grow_dirty 00:10:05.387 ************************************ 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:05.387 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:05.387 nvmf_trace.0 00:10:05.646 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:05.646 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:05.646 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.646 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:07.553 rmmod nvme_tcp 00:10:07.553 rmmod nvme_fabrics 00:10:07.553 rmmod nvme_keyring 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 78414 ']' 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 78414 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 78414 ']' 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 78414 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:07.553 01:31:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78414 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.553 killing process with pid 78414 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78414' 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 78414 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 78414 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:07.553 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:07.553 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:10:07.813 ************************************ 00:10:07.813 END TEST nvmf_lvs_grow 00:10:07.813 ************************************ 00:10:07.813 00:10:07.813 real 0m40.806s 00:10:07.813 user 1m3.692s 00:10:07.813 sys 0m13.544s 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.813 ************************************ 00:10:07.813 START TEST nvmf_bdev_io_wait 00:10:07.813 ************************************ 00:10:07.813 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:07.813 * Looking for test storage... 
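nvmftestfini, traced just above, is the standard teardown between these test groups: flush, unload the kernel initiator modules, stop the target app, strip only the SPDK iptables rules, then remove the virtual topology. In outline (a condensed sketch of the steps shown above, not the harness code itself):

  sync
  modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring being dropped
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                      # killprocess; 78414 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep everything except the SPDK_NVMF rules
  # ...followed by the nvmf_* bridge, veth and namespace deletions (nvmf_veth_fini / remove_spdk_ns) traced above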
00:10:07.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:07.814 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:07.814 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:07.814 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:08.074 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:08.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.075 --rc genhtml_branch_coverage=1 00:10:08.075 --rc genhtml_function_coverage=1 00:10:08.075 --rc genhtml_legend=1 00:10:08.075 --rc geninfo_all_blocks=1 00:10:08.075 --rc geninfo_unexecuted_blocks=1 00:10:08.075 00:10:08.075 ' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:08.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.075 --rc genhtml_branch_coverage=1 00:10:08.075 --rc genhtml_function_coverage=1 00:10:08.075 --rc genhtml_legend=1 00:10:08.075 --rc geninfo_all_blocks=1 00:10:08.075 --rc geninfo_unexecuted_blocks=1 00:10:08.075 00:10:08.075 ' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:08.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.075 --rc genhtml_branch_coverage=1 00:10:08.075 --rc genhtml_function_coverage=1 00:10:08.075 --rc genhtml_legend=1 00:10:08.075 --rc geninfo_all_blocks=1 00:10:08.075 --rc geninfo_unexecuted_blocks=1 00:10:08.075 00:10:08.075 ' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:08.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.075 --rc genhtml_branch_coverage=1 00:10:08.075 --rc genhtml_function_coverage=1 00:10:08.075 --rc genhtml_legend=1 00:10:08.075 --rc geninfo_all_blocks=1 00:10:08.075 --rc geninfo_unexecuted_blocks=1 00:10:08.075 00:10:08.075 ' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.075 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
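nvmf/common.sh, sourced a few entries back, is also where the initiator identity comes from: a host NQN from nvme gen-hostnqn plus the matching host ID, packed into NVME_HOST as --hostnqn/--hostid arguments. For tests that use the kernel initiator these end up on an nvme connect line shaped like the sketch below (subsystem NQN, address and port are the values configured here; deriving the host ID from the NQN's UUID suffix is an assumption that matches the two values printed above):

  NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:febd874a-... in this run
  NVME_HOSTID=${NVME_HOSTNQN##*:}             # assumption: host ID is the UUID tail of the host NQN
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"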
00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:08.075 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:08.076 
01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:08.076 Cannot find device "nvmf_init_br" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:08.076 Cannot find device "nvmf_init_br2" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:08.076 Cannot find device "nvmf_tgt_br" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:08.076 Cannot find device "nvmf_tgt_br2" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:08.076 Cannot find device "nvmf_init_br" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:08.076 Cannot find device "nvmf_init_br2" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:08.076 Cannot find device "nvmf_tgt_br" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:08.076 Cannot find device "nvmf_tgt_br2" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:08.076 Cannot find device "nvmf_br" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:08.076 Cannot find device "nvmf_init_if" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:08.076 Cannot find device "nvmf_init_if2" 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:08.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:10:08.076 
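The "Cannot find device" and "Cannot open network namespace" messages above are harmless probe failures: nvmf_veth_init first tries to tear down leftovers from a previous run before building the topology. The ip commands that follow in the trace then create it; a condensed sketch of that sequence (names and addresses taken from the trace itself; link-up, iptables and ping-verification steps omitted) is:

    # veth/bridge topology built by nvmf_veth_init: two initiator-side interfaces in the
    # root namespace, two target-side interfaces inside nvmf_tgt_ns_spdk, all bridged
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # first initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                 # second initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br      # enslave each peer end to the bridge
    done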
01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:08.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:08.076 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:08.336 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:08.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:08.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:08.336 00:10:08.337 --- 10.0.0.3 ping statistics --- 00:10:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.337 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:08.337 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:08.337 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:08.337 00:10:08.337 --- 10.0.0.4 ping statistics --- 00:10:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.337 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:08.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:08.337 00:10:08.337 --- 10.0.0.1 ping statistics --- 00:10:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.337 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:08.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:08.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:08.337 00:10:08.337 --- 10.0.0.2 ping statistics --- 00:10:08.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.337 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=78784 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 78784 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 78784 ']' 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.337 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.337 [2024-12-16 01:31:38.992118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:08.337 [2024-12-16 01:31:38.992198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.596 [2024-12-16 01:31:39.143021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.596 [2024-12-16 01:31:39.169055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.596 [2024-12-16 01:31:39.169111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.596 [2024-12-16 01:31:39.169124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.596 [2024-12-16 01:31:39.169134] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.596 [2024-12-16 01:31:39.169143] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.596 [2024-12-16 01:31:39.170074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.596 [2024-12-16 01:31:39.170209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.596 [2024-12-16 01:31:39.170323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.596 [2024-12-16 01:31:39.170325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 [2024-12-16 01:31:39.352200] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 [2024-12-16 01:31:39.367310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 Malloc0 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:08.857 [2024-12-16 01:31:39.416210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=78817 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=78819 00:10:08.857 01:31:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.857 { 00:10:08.857 "params": { 00:10:08.857 "name": "Nvme$subsystem", 00:10:08.857 "trtype": "$TEST_TRANSPORT", 00:10:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.857 "adrfam": "ipv4", 00:10:08.857 "trsvcid": "$NVMF_PORT", 00:10:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.857 "hdgst": ${hdgst:-false}, 00:10:08.857 "ddgst": ${ddgst:-false} 00:10:08.857 }, 00:10:08.857 "method": "bdev_nvme_attach_controller" 00:10:08.857 } 00:10:08.857 EOF 00:10:08.857 )") 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=78821 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.857 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.857 { 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme$subsystem", 00:10:08.858 "trtype": "$TEST_TRANSPORT", 00:10:08.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "$NVMF_PORT", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.858 "hdgst": ${hdgst:-false}, 00:10:08.858 "ddgst": ${ddgst:-false} 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 } 00:10:08.858 EOF 00:10:08.858 )") 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=78824 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.858 { 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme$subsystem", 00:10:08.858 "trtype": "$TEST_TRANSPORT", 00:10:08.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "$NVMF_PORT", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.858 "hdgst": ${hdgst:-false}, 00:10:08.858 "ddgst": ${ddgst:-false} 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 } 
00:10:08.858 EOF 00:10:08.858 )") 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:08.858 { 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme$subsystem", 00:10:08.858 "trtype": "$TEST_TRANSPORT", 00:10:08.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "$NVMF_PORT", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:08.858 "hdgst": ${hdgst:-false}, 00:10:08.858 "ddgst": ${ddgst:-false} 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 } 00:10:08.858 EOF 00:10:08.858 )") 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme1", 00:10:08.858 "trtype": "tcp", 00:10:08.858 "traddr": "10.0.0.3", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "4420", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.858 "hdgst": false, 00:10:08.858 "ddgst": false 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 }' 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme1", 00:10:08.858 "trtype": "tcp", 00:10:08.858 "traddr": "10.0.0.3", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "4420", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.858 "hdgst": false, 00:10:08.858 "ddgst": false 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 }' 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
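Before the bdevperf jobs are launched, the trace configures the target through its RPC socket: bdev_set_options, framework_start_init, nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener. A sketch of that same sequence replayed by hand with rpc.py (hypothetical manual equivalents of the rpc_cmd calls visible above, assuming the target's default /var/tmp/spdk.sock) would be:

    RPC="scripts/rpc.py"
    $RPC bdev_set_options -p 5 -c 1           # tiny bdev_io pool, presumably to force the io_wait path
    $RPC framework_start_init                 # leave --wait-for-rpc mode and finish subsystem init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420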
00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme1", 00:10:08.858 "trtype": "tcp", 00:10:08.858 "traddr": "10.0.0.3", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "4420", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.858 "hdgst": false, 00:10:08.858 "ddgst": false 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 }' 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:08.858 "params": { 00:10:08.858 "name": "Nvme1", 00:10:08.858 "trtype": "tcp", 00:10:08.858 "traddr": "10.0.0.3", 00:10:08.858 "adrfam": "ipv4", 00:10:08.858 "trsvcid": "4420", 00:10:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:08.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:08.858 "hdgst": false, 00:10:08.858 "ddgst": false 00:10:08.858 }, 00:10:08.858 "method": "bdev_nvme_attach_controller" 00:10:08.858 }' 00:10:08.858 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 78817 00:10:08.858 [2024-12-16 01:31:39.491554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:08.858 [2024-12-16 01:31:39.491640] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:08.858 [2024-12-16 01:31:39.493550] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:08.858 [2024-12-16 01:31:39.493622] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:09.118 [2024-12-16 01:31:39.513440] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:09.118 [2024-12-16 01:31:39.513546] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:09.118 [2024-12-16 01:31:39.522639] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:09.118 [2024-12-16 01:31:39.522733] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:09.118 [2024-12-16 01:31:39.684164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.118 [2024-12-16 01:31:39.701060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:09.118 [2024-12-16 01:31:39.715058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.118 [2024-12-16 01:31:39.727813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.118 [2024-12-16 01:31:39.744038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:10:09.118 [2024-12-16 01:31:39.757917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.118 [2024-12-16 01:31:39.772563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.377 [2024-12-16 01:31:39.788561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:09.377 [2024-12-16 01:31:39.802318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.377 Running I/O for 1 seconds... 00:10:09.377 [2024-12-16 01:31:39.850572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.377 Running I/O for 1 seconds... 00:10:09.377 [2024-12-16 01:31:39.868378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:09.377 [2024-12-16 01:31:39.886072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.377 Running I/O for 1 seconds... 00:10:09.377 Running I/O for 1 seconds... 
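The four "Running I/O for 1 seconds..." lines above come from four separate bdevperf instances, one per workload, each reading the bdev_nvme_attach_controller JSON fragment printed earlier (Nvme1 over TCP to 10.0.0.3:4420) from /dev/fd/63. As the trace shows, they are launched as follows (paths shown repo-relative; the trace uses the absolute /home/vagrant/spdk_repo/spdk prefix):

    # one bdevperf per workload, distinct core masks and shm IDs, 128-deep queue, 4 KiB I/O, 1 s runs
    build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
    build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read  -t 1 -s 256
    build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
    build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256

The per-workload latency tables that follow are the output of these four runs.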
00:10:10.313 6263.00 IOPS, 24.46 MiB/s 00:10:10.313 Latency(us) 00:10:10.313 [2024-12-16T01:31:40.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.313 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:10.313 Nvme1n1 : 1.02 6266.61 24.48 0.00 0.00 20197.76 6404.65 37653.41 00:10:10.313 [2024-12-16T01:31:40.972Z] =================================================================================================================== 00:10:10.314 [2024-12-16T01:31:40.972Z] Total : 6266.61 24.48 0.00 0.00 20197.76 6404.65 37653.41 00:10:10.314 9693.00 IOPS, 37.86 MiB/s 00:10:10.314 Latency(us) 00:10:10.314 [2024-12-16T01:31:40.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.314 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:10.314 Nvme1n1 : 1.01 9752.57 38.10 0.00 0.00 13066.62 7179.17 31695.59 00:10:10.314 [2024-12-16T01:31:40.972Z] =================================================================================================================== 00:10:10.314 [2024-12-16T01:31:40.972Z] Total : 9752.57 38.10 0.00 0.00 13066.62 7179.17 31695.59 00:10:10.314 158168.00 IOPS, 617.84 MiB/s 00:10:10.314 Latency(us) 00:10:10.314 [2024-12-16T01:31:40.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.314 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:10.314 Nvme1n1 : 1.00 157843.58 616.58 0.00 0.00 806.61 370.50 2517.18 00:10:10.314 [2024-12-16T01:31:40.972Z] =================================================================================================================== 00:10:10.314 [2024-12-16T01:31:40.972Z] Total : 157843.58 616.58 0.00 0.00 806.61 370.50 2517.18 00:10:10.314 01:31:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 78819 00:10:10.573 6153.00 IOPS, 24.04 MiB/s 00:10:10.573 Latency(us) 00:10:10.573 [2024-12-16T01:31:41.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.573 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:10.573 Nvme1n1 : 1.01 6238.53 24.37 0.00 0.00 20440.24 6076.97 44802.79 00:10:10.573 [2024-12-16T01:31:41.231Z] =================================================================================================================== 00:10:10.573 [2024-12-16T01:31:41.231Z] Total : 6238.53 24.37 0.00 0.00 20440.24 6076.97 44802.79 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 78821 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 78824 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:10.573 rmmod nvme_tcp 00:10:10.573 rmmod nvme_fabrics 00:10:10.573 rmmod nvme_keyring 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 78784 ']' 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 78784 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 78784 ']' 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 78784 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.573 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78784 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.833 killing process with pid 78784 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78784' 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 78784 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 78784 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:10.833 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.093 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:10:11.093 00:10:11.094 real 0m3.325s 00:10:11.094 user 0m13.076s 00:10:11.094 sys 0m2.081s 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.094 ************************************ 00:10:11.094 END TEST nvmf_bdev_io_wait 00:10:11.094 ************************************ 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.094 ************************************ 00:10:11.094 START TEST nvmf_queue_depth 00:10:11.094 ************************************ 00:10:11.094 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:11.355 * Looking for test storage... 
00:10:11.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.355 --rc genhtml_branch_coverage=1 00:10:11.355 --rc genhtml_function_coverage=1 00:10:11.355 --rc genhtml_legend=1 00:10:11.355 --rc geninfo_all_blocks=1 00:10:11.355 --rc geninfo_unexecuted_blocks=1 00:10:11.355 00:10:11.355 ' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.355 --rc genhtml_branch_coverage=1 00:10:11.355 --rc genhtml_function_coverage=1 00:10:11.355 --rc genhtml_legend=1 00:10:11.355 --rc geninfo_all_blocks=1 00:10:11.355 --rc geninfo_unexecuted_blocks=1 00:10:11.355 00:10:11.355 ' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.355 --rc genhtml_branch_coverage=1 00:10:11.355 --rc genhtml_function_coverage=1 00:10:11.355 --rc genhtml_legend=1 00:10:11.355 --rc geninfo_all_blocks=1 00:10:11.355 --rc geninfo_unexecuted_blocks=1 00:10:11.355 00:10:11.355 ' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.355 --rc genhtml_branch_coverage=1 00:10:11.355 --rc genhtml_function_coverage=1 00:10:11.355 --rc genhtml_legend=1 00:10:11.355 --rc geninfo_all_blocks=1 00:10:11.355 --rc geninfo_unexecuted_blocks=1 00:10:11.355 00:10:11.355 ' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.355 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:11.356 
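The queue_depth test starting here reuses the same 64 MiB / 512-byte malloc geometry and, just below, declares a dedicated RPC socket for its bdevperf instance (bdevperf_rpc_sock=/var/tmp/bdevperf.sock) before running the same nvmftestinit bootstrap seen earlier. As a purely illustrative sketch (not taken from the test script), addressing an SPDK application on such a non-default socket with rpc.py looks like:

    # hypothetical: query an app listening on the dedicated bdevperf RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods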
01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.356 01:31:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.356 Cannot find device "nvmf_init_br" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.356 Cannot find device "nvmf_init_br2" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.356 Cannot find device "nvmf_tgt_br" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.356 Cannot find device "nvmf_tgt_br2" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.356 Cannot find device "nvmf_init_br" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.356 Cannot find device "nvmf_init_br2" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.356 Cannot find device "nvmf_tgt_br" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:11.356 Cannot find device "nvmf_tgt_br2" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:11.356 Cannot find device "nvmf_br" 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:10:11.356 01:31:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:11.356 Cannot find device "nvmf_init_if" 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:11.616 Cannot find device "nvmf_init_if2" 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.616 01:31:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.616 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:11.616 
01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:11.616 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:11.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:11.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:11.617 00:10:11.617 --- 10.0.0.3 ping statistics --- 00:10:11.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.617 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:11.617 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:11.617 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:10:11.617 00:10:11.617 --- 10.0.0.4 ping statistics --- 00:10:11.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.617 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:11.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:11.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:11.617 00:10:11.617 --- 10.0.0.1 ping statistics --- 00:10:11.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.617 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:11.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:11.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:10:11.617 00:10:11.617 --- 10.0.0.2 ping statistics --- 00:10:11.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.617 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.617 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=79086 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 79086 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 79086 ']' 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.876 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:11.876 [2024-12-16 01:31:42.347247] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
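Note: the veth/namespace topology that nvmf_veth_init builds in the trace above can be reproduced by hand with a short ip(8)/iptables sequence. The sketch below is condensed from the traced commands and keeps only the first initiator/target interface pair (the *_if2 pair, the loopback bring-up, and the iptables rule comments are omitted); the interface names, bridge, and 10.0.0.0/24 addresses are taken from the log.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # host -> target-namespace reachability check, as in the trace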
00:10:11.876 [2024-12-16 01:31:42.347376] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.876 [2024-12-16 01:31:42.495995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.876 [2024-12-16 01:31:42.514853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.876 [2024-12-16 01:31:42.514919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.876 [2024-12-16 01:31:42.514944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.876 [2024-12-16 01:31:42.514952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.876 [2024-12-16 01:31:42.514959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.876 [2024-12-16 01:31:42.515230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.137 [2024-12-16 01:31:42.544259] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 [2024-12-16 01:31:42.645938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 Malloc0 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 [2024-12-16 01:31:42.695379] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=79105 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 79105 /var/tmp/bdevperf.sock 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 79105 ']' 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.137 01:31:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.137 [2024-12-16 01:31:42.761848] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
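Note: the rpc_cmd calls traced above correspond to ordinary scripts/rpc.py invocations against the target's default RPC socket (/var/tmp/spdk.sock). A condensed sketch of the same target-side configuration, with the sizes, NQN, serial, listen address, and port copied verbatim from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

With the listener up on 10.0.0.3:4420, the bdevperf process started above (-q 1024 -o 4096 -w verify -t 10) can attach to nqn.2016-06.io.spdk:cnode1 as an NVMe/TCP initiator.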
00:10:12.137 [2024-12-16 01:31:42.761968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79105 ] 00:10:12.397 [2024-12-16 01:31:42.911058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.397 [2024-12-16 01:31:42.930323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.397 [2024-12-16 01:31:42.959152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:12.397 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.397 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:12.397 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:12.397 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.397 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.657 NVMe0n1 00:10:12.657 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.657 01:31:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.657 Running I/O for 10 seconds... 00:10:14.977 7188.00 IOPS, 28.08 MiB/s [2024-12-16T01:31:46.573Z] 7813.00 IOPS, 30.52 MiB/s [2024-12-16T01:31:47.510Z] 8042.67 IOPS, 31.42 MiB/s [2024-12-16T01:31:48.447Z] 8205.00 IOPS, 32.05 MiB/s [2024-12-16T01:31:49.412Z] 8223.20 IOPS, 32.12 MiB/s [2024-12-16T01:31:50.356Z] 8227.00 IOPS, 32.14 MiB/s [2024-12-16T01:31:51.295Z] 8342.29 IOPS, 32.59 MiB/s [2024-12-16T01:31:52.672Z] 8402.88 IOPS, 32.82 MiB/s [2024-12-16T01:31:53.607Z] 8462.11 IOPS, 33.06 MiB/s [2024-12-16T01:31:53.607Z] 8511.30 IOPS, 33.25 MiB/s 00:10:22.949 Latency(us) 00:10:22.949 [2024-12-16T01:31:53.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.949 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:22.949 Verification LBA range: start 0x0 length 0x4000 00:10:22.949 NVMe0n1 : 10.10 8522.58 33.29 0.00 0.00 119565.48 24546.21 87222.46 00:10:22.949 [2024-12-16T01:31:53.607Z] =================================================================================================================== 00:10:22.949 [2024-12-16T01:31:53.607Z] Total : 8522.58 33.29 0.00 0.00 119565.48 24546.21 87222.46 00:10:22.949 { 00:10:22.949 "results": [ 00:10:22.949 { 00:10:22.949 "job": "NVMe0n1", 00:10:22.949 "core_mask": "0x1", 00:10:22.949 "workload": "verify", 00:10:22.949 "status": "finished", 00:10:22.949 "verify_range": { 00:10:22.949 "start": 0, 00:10:22.949 "length": 16384 00:10:22.949 }, 00:10:22.949 "queue_depth": 1024, 00:10:22.949 "io_size": 4096, 00:10:22.949 "runtime": 10.098934, 00:10:22.949 "iops": 8522.582680508656, 00:10:22.949 "mibps": 33.29133859573694, 00:10:22.949 "io_failed": 0, 00:10:22.949 "io_timeout": 0, 00:10:22.949 "avg_latency_us": 119565.48090350343, 00:10:22.949 "min_latency_us": 24546.21090909091, 00:10:22.949 "max_latency_us": 87222.45818181818 00:10:22.949 
} 00:10:22.949 ], 00:10:22.949 "core_count": 1 00:10:22.949 } 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 79105 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 79105 ']' 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 79105 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79105 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.949 killing process with pid 79105 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79105' 00:10:22.949 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 79105 00:10:22.949 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.949 00:10:22.949 Latency(us) 00:10:22.949 [2024-12-16T01:31:53.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.949 [2024-12-16T01:31:53.607Z] =================================================================================================================== 00:10:22.949 [2024-12-16T01:31:53.607Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 79105 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.950 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.950 rmmod nvme_tcp 00:10:22.950 rmmod nvme_fabrics 00:10:23.209 rmmod nvme_keyring 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 79086 ']' 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 79086 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 79086 ']' 00:10:23.209 
01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 79086 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79086 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:23.209 killing process with pid 79086 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79086' 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 79086 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 79086 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:23.209 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:23.469 01:31:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.469 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:23.469 00:10:23.469 real 0m12.362s 00:10:23.469 user 0m21.205s 00:10:23.469 sys 0m2.097s 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:23.469 ************************************ 00:10:23.469 END TEST nvmf_queue_depth 00:10:23.469 ************************************ 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:23.469 ************************************ 00:10:23.469 START TEST nvmf_target_multipath 00:10:23.469 ************************************ 00:10:23.469 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:23.729 * Looking for test storage... 
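Note (recapping the queue-depth run that finishes above, before the multipath test output continues): on the initiator side the test starts bdevperf with its own RPC socket, attaches the exported namespace as an NVMe bdev over TCP, and drives the verify workload through bdevperf.py. A condensed sketch built from the traced commands:

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$BDEVPERF -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# once the RPC socket is up, attach the target namespace; this creates bdev NVMe0n1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# run the 10-second verify workload at queue depth 1024
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The reported throughput is consistent with the measured IOPS at the 4 KiB I/O size: 8522.58 IOPS x 4096 bytes per I/O is about 33.3 MiB/s, matching the 33.29 MiB/s in the result table above.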
00:10:23.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.729 --rc genhtml_branch_coverage=1 00:10:23.729 --rc genhtml_function_coverage=1 00:10:23.729 --rc genhtml_legend=1 00:10:23.729 --rc geninfo_all_blocks=1 00:10:23.729 --rc geninfo_unexecuted_blocks=1 00:10:23.729 00:10:23.729 ' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.729 --rc genhtml_branch_coverage=1 00:10:23.729 --rc genhtml_function_coverage=1 00:10:23.729 --rc genhtml_legend=1 00:10:23.729 --rc geninfo_all_blocks=1 00:10:23.729 --rc geninfo_unexecuted_blocks=1 00:10:23.729 00:10:23.729 ' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.729 --rc genhtml_branch_coverage=1 00:10:23.729 --rc genhtml_function_coverage=1 00:10:23.729 --rc genhtml_legend=1 00:10:23.729 --rc geninfo_all_blocks=1 00:10:23.729 --rc geninfo_unexecuted_blocks=1 00:10:23.729 00:10:23.729 ' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:23.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:23.729 --rc genhtml_branch_coverage=1 00:10:23.729 --rc genhtml_function_coverage=1 00:10:23.729 --rc genhtml_legend=1 00:10:23.729 --rc geninfo_all_blocks=1 00:10:23.729 --rc geninfo_unexecuted_blocks=1 00:10:23.729 00:10:23.729 ' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:23.729 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.730 
01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:23.730 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:23.730 01:31:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:23.730 Cannot find device "nvmf_init_br" 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:23.730 Cannot find device "nvmf_init_br2" 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:23.730 Cannot find device "nvmf_tgt_br" 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:23.730 Cannot find device "nvmf_tgt_br2" 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:23.730 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:23.730 Cannot find device "nvmf_init_br" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:23.989 Cannot find device "nvmf_init_br2" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:23.989 Cannot find device "nvmf_tgt_br" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:23.989 Cannot find device "nvmf_tgt_br2" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:23.989 Cannot find device "nvmf_br" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:23.989 Cannot find device "nvmf_init_if" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:23.989 Cannot find device "nvmf_init_if2" 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:23.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:23.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:23.989 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:23.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:23.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:10:23.990 00:10:23.990 --- 10.0.0.3 ping statistics --- 00:10:23.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.990 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:10:23.990 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:24.249 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:24.249 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:24.249 00:10:24.249 --- 10.0.0.4 ping statistics --- 00:10:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.249 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:24.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:24.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:24.249 00:10:24.249 --- 10.0.0.1 ping statistics --- 00:10:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.249 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:24.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:10:24.249 00:10:24.249 --- 10.0.0.2 ping statistics --- 00:10:24.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.249 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=79470 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 79470 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 79470 ']' 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:24.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:24.249 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:24.249 [2024-12-16 01:31:54.737477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:24.249 [2024-12-16 01:31:54.737637] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.249 [2024-12-16 01:31:54.889150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.508 [2024-12-16 01:31:54.914641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.508 [2024-12-16 01:31:54.914914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.508 [2024-12-16 01:31:54.915074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.508 [2024-12-16 01:31:54.915218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.508 [2024-12-16 01:31:54.915268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.508 [2024-12-16 01:31:54.916278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.508 [2024-12-16 01:31:54.916404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.508 [2024-12-16 01:31:54.916564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.508 [2024-12-16 01:31:54.916571] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.508 [2024-12-16 01:31:54.951486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.508 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.508 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:10:24.508 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.508 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:24.508 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:24.508 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.508 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:24.767 [2024-12-16 01:31:55.260380] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.767 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:25.026 Malloc0 00:10:25.026 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:25.286 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.545 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:25.804 [2024-12-16 01:31:56.316599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:25.804 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:26.063 [2024-12-16 01:31:56.552803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:26.063 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:26.063 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:26.322 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.322 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:10:26.322 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.322 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:26.322 01:31:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:28.229 01:31:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=79552 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:28.229 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:28.489 [global] 00:10:28.489 thread=1 00:10:28.489 invalidate=1 00:10:28.489 rw=randrw 00:10:28.489 time_based=1 00:10:28.489 runtime=6 00:10:28.489 ioengine=libaio 00:10:28.489 direct=1 00:10:28.489 bs=4096 00:10:28.489 iodepth=128 00:10:28.489 norandommap=0 00:10:28.489 numjobs=1 00:10:28.489 00:10:28.489 verify_dump=1 00:10:28.489 verify_backlog=512 00:10:28.489 verify_state_save=0 00:10:28.489 do_verify=1 00:10:28.489 verify=crc32c-intel 00:10:28.489 [job0] 00:10:28.489 filename=/dev/nvme0n1 00:10:28.489 Could not set queue depth (nvme0n1) 00:10:28.489 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:28.489 fio-3.35 00:10:28.489 Starting 1 thread 00:10:29.427 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:29.686 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:29.946 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:30.205 01:32:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.464 01:32:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 79552 00:10:34.656 00:10:34.656 job0: (groupid=0, jobs=1): err= 0: pid=79573: Mon Dec 16 01:32:05 2024 00:10:34.656 read: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(240MiB/6007msec) 00:10:34.656 slat (usec): min=3, max=6139, avg=57.94, stdev=220.91 00:10:34.656 clat (usec): min=1387, max=16030, avg=8532.67, stdev=1414.45 00:10:34.656 lat (usec): min=1453, max=16046, avg=8590.61, stdev=1417.20 00:10:34.656 clat percentiles (usec): 00:10:34.656 | 1.00th=[ 4424], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 7832], 00:10:34.656 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8586], 00:10:34.656 | 70.00th=[ 8848], 80.00th=[ 9110], 90.00th=[ 9634], 95.00th=[11731], 00:10:34.656 | 99.00th=[13173], 99.50th=[13435], 99.90th=[13960], 99.95th=[14484], 00:10:34.656 | 99.99th=[15795] 00:10:34.656 bw ( KiB/s): min= 5408, max=26960, per=51.94%, avg=21207.33, stdev=7312.17, samples=12 00:10:34.656 iops : min= 1352, max= 6740, avg=5301.83, stdev=1828.04, samples=12 00:10:34.656 write: IOPS=6136, BW=24.0MiB/s (25.1MB/s)(125MiB/5208msec); 0 zone resets 00:10:34.656 slat (usec): min=13, max=5787, avg=65.81, stdev=162.15 00:10:34.656 clat (usec): min=1924, max=15009, avg=7475.52, stdev=1294.00 00:10:34.656 lat (usec): min=1968, max=15049, avg=7541.34, stdev=1298.37 00:10:34.656 clat percentiles (usec): 00:10:34.656 | 1.00th=[ 3359], 5.00th=[ 4490], 10.00th=[ 6063], 20.00th=[ 6980], 00:10:34.656 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 7832], 00:10:34.656 | 70.00th=[ 8029], 80.00th=[ 8225], 90.00th=[ 8586], 95.00th=[ 8848], 00:10:34.656 | 99.00th=[11469], 99.50th=[11994], 99.90th=[13698], 99.95th=[14091], 00:10:34.656 | 99.99th=[14746] 00:10:34.656 bw ( KiB/s): min= 5752, max=26592, per=86.62%, avg=21262.67, stdev=7084.16, samples=12 00:10:34.656 iops : min= 1438, max= 6648, avg=5315.67, stdev=1771.04, samples=12 00:10:34.656 lat (msec) : 2=0.03%, 4=1.36%, 10=92.98%, 20=5.64% 00:10:34.656 cpu : usr=5.41%, sys=21.78%, ctx=5447, majf=0, minf=108 00:10:34.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:34.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.656 issued rwts: total=61320,31958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.656 00:10:34.656 Run status group 0 (all jobs): 00:10:34.656 READ: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=240MiB (251MB), run=6007-6007msec 00:10:34.656 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=125MiB (131MB), run=5208-5208msec 00:10:34.656 00:10:34.656 Disk stats (read/write): 00:10:34.656 nvme0n1: ios=60461/31332, merge=0/0, ticks=495584/219752, in_queue=715336, util=98.60% 00:10:34.656 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:34.915 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=79661 00:10:35.484 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:35.484 [global] 00:10:35.484 thread=1 00:10:35.484 invalidate=1 00:10:35.484 rw=randrw 00:10:35.484 time_based=1 00:10:35.484 runtime=6 00:10:35.484 ioengine=libaio 00:10:35.484 direct=1 00:10:35.484 bs=4096 00:10:35.484 iodepth=128 00:10:35.484 norandommap=0 00:10:35.484 numjobs=1 00:10:35.484 00:10:35.484 verify_dump=1 00:10:35.484 verify_backlog=512 00:10:35.484 verify_state_save=0 00:10:35.484 do_verify=1 00:10:35.484 verify=crc32c-intel 00:10:35.484 [job0] 00:10:35.484 filename=/dev/nvme0n1 00:10:35.484 Could not set queue depth (nvme0n1) 00:10:35.484 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.484 fio-3.35 00:10:35.484 Starting 1 thread 00:10:36.423 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:36.683 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:36.943 
01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:36.943 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:37.201 01:32:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:37.460 01:32:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 79661 00:10:41.653 00:10:41.653 job0: (groupid=0, jobs=1): err= 0: pid=79682: Mon Dec 16 01:32:12 2024 00:10:41.653 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(264MiB/6007msec) 00:10:41.653 slat (usec): min=5, max=6400, avg=43.04, stdev=191.46 00:10:41.653 clat (usec): min=360, max=15240, avg=7722.42, stdev=2100.12 00:10:41.653 lat (usec): min=378, max=15250, avg=7765.46, stdev=2115.56 00:10:41.653 clat percentiles (usec): 00:10:41.653 | 1.00th=[ 2737], 5.00th=[ 3818], 10.00th=[ 4621], 20.00th=[ 5997], 00:10:41.653 | 30.00th=[ 7177], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:41.653 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11338], 00:10:41.653 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14484], 99.95th=[14615], 00:10:41.653 | 99.99th=[14746] 00:10:41.653 bw ( KiB/s): min=11400, max=36008, per=54.07%, avg=24341.55, stdev=6593.04, samples=11 00:10:41.653 iops : min= 2850, max= 9002, avg=6085.36, stdev=1648.26, samples=11 00:10:41.653 write: IOPS=6554, BW=25.6MiB/s (26.8MB/s)(143MiB/5578msec); 0 zone resets 00:10:41.653 slat (usec): min=15, max=1736, avg=55.69, stdev=140.07 00:10:41.653 clat (usec): min=1292, max=14652, avg=6583.21, stdev=1865.46 00:10:41.653 lat (usec): min=1317, max=14675, avg=6638.90, stdev=1880.91 00:10:41.653 clat percentiles (usec): 00:10:41.653 | 1.00th=[ 2573], 5.00th=[ 3326], 10.00th=[ 3785], 20.00th=[ 4490], 00:10:41.653 | 30.00th=[ 5407], 40.00th=[ 6783], 50.00th=[ 7242], 60.00th=[ 7504], 00:10:41.653 | 70.00th=[ 7767], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:41.653 | 99.00th=[11076], 99.50th=[11863], 99.90th=[13042], 99.95th=[13304], 00:10:41.653 | 99.99th=[14222] 00:10:41.653 bw ( KiB/s): min=11288, max=36856, per=92.69%, avg=24302.36, stdev=6619.14, samples=11 00:10:41.653 iops : min= 2822, max= 9214, avg=6075.55, stdev=1654.78, samples=11 00:10:41.653 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:10:41.653 lat (msec) : 2=0.28%, 4=8.18%, 10=86.36%, 20=5.12% 00:10:41.653 cpu : usr=6.31%, sys=22.48%, ctx=5812, majf=0, minf=90 00:10:41.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:41.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:41.653 issued rwts: total=67602,36561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.653 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:41.653 00:10:41.653 Run status group 0 (all jobs): 00:10:41.653 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=264MiB (277MB), run=6007-6007msec 00:10:41.653 WRITE: bw=25.6MiB/s (26.8MB/s), 25.6MiB/s-25.6MiB/s (26.8MB/s-26.8MB/s), io=143MiB (150MB), run=5578-5578msec 00:10:41.653 00:10:41.653 Disk stats (read/write): 00:10:41.653 nvme0n1: ios=66757/35992, merge=0/0, ticks=493115/220934, in_queue=714049, util=98.63% 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:41.653 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.912 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:41.912 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:41.912 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:41.912 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:41.912 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.912 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:42.171 rmmod nvme_tcp 00:10:42.171 rmmod nvme_fabrics 00:10:42.171 rmmod nvme_keyring 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 
79470 ']' 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 79470 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 79470 ']' 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 79470 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:42.171 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79470 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79470' 00:10:42.172 killing process with pid 79470 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 79470 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 79470 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:42.172 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:42.431 01:32:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:42.431 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:42.431 00:10:42.431 real 0m18.962s 00:10:42.431 user 1m10.313s 00:10:42.431 sys 0m9.884s 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.431 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:42.431 ************************************ 00:10:42.431 END TEST nvmf_target_multipath 00:10:42.431 ************************************ 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:42.692 ************************************ 00:10:42.692 START TEST nvmf_zcopy 00:10:42.692 ************************************ 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:42.692 * Looking for test storage... 
00:10:42.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.692 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:42.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.692 --rc genhtml_branch_coverage=1 00:10:42.693 --rc genhtml_function_coverage=1 00:10:42.693 --rc genhtml_legend=1 00:10:42.693 --rc geninfo_all_blocks=1 00:10:42.693 --rc geninfo_unexecuted_blocks=1 00:10:42.693 00:10:42.693 ' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:42.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.693 --rc genhtml_branch_coverage=1 00:10:42.693 --rc genhtml_function_coverage=1 00:10:42.693 --rc genhtml_legend=1 00:10:42.693 --rc geninfo_all_blocks=1 00:10:42.693 --rc geninfo_unexecuted_blocks=1 00:10:42.693 00:10:42.693 ' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:42.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.693 --rc genhtml_branch_coverage=1 00:10:42.693 --rc genhtml_function_coverage=1 00:10:42.693 --rc genhtml_legend=1 00:10:42.693 --rc geninfo_all_blocks=1 00:10:42.693 --rc geninfo_unexecuted_blocks=1 00:10:42.693 00:10:42.693 ' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:42.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.693 --rc genhtml_branch_coverage=1 00:10:42.693 --rc genhtml_function_coverage=1 00:10:42.693 --rc genhtml_legend=1 00:10:42.693 --rc geninfo_all_blocks=1 00:10:42.693 --rc geninfo_unexecuted_blocks=1 00:10:42.693 00:10:42.693 ' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
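The scripts/common.sh trace just above is the lcov version gate: lt 1.15 2 splits each version string on '.', '-' and ':' and compares component by component to decide whether the legacy --rc lcov_* options are still needed. A minimal standalone sketch of that comparison (hypothetical helper name ver_lt; not the actual cmp_versions implementation) behaves the same way:

  # Hedged sketch of a per-component "less than" version compare, modeled on the lt/cmp_versions trace.
  ver_lt() {                          # ver_lt 1.15 2  -> exit 0 (true) because 1 < 2
      local IFS=.-:                   # same separators the trace splits on
      local -a v1=($1) v2=($2)
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing components count as 0
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1                        # equal versions are not "less than"
  }
  if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
      echo "lcov < 2: keep the legacy --rc lcov_branch_coverage/lcov_function_coverage options"
  fi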
00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
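nvmftestinit has just registered nvmftestfini on SIGINT/SIGTERM/EXIT, which is what guarantees the veth topology and the SPDK-tagged iptables rules are torn down even if the test aborts; the multipath run above went through that same cleanup at 01:32:12. The following is a condensed sketch of that teardown, using the commands visible in the earlier trace (the namespace removal itself runs behind xtrace_disable, so the final ip netns delete here is an assumption):

  # Sketch of the nvmf_tcp_fini/nvmf_veth_fini teardown the trap runs (commands from the 01:32:12 trace, condensed).
  iptables-save | grep -v SPDK_NVMF | iptables-restore           # drop the SPDK_NVMF-tagged rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster                                 # detach every peer from the bridge
      ip link set "$dev" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                                # assumed; remove_spdk_ns runs with xtrace disabled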
00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.693 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:42.694 Cannot find device "nvmf_init_br" 00:10:42.694 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:42.694 01:32:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:42.953 Cannot find device "nvmf_init_br2" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:42.953 Cannot find device "nvmf_tgt_br" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.953 Cannot find device "nvmf_tgt_br2" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:42.953 Cannot find device "nvmf_init_br" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:42.953 Cannot find device "nvmf_init_br2" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:42.953 Cannot find device "nvmf_tgt_br" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:42.953 Cannot find device "nvmf_tgt_br2" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:42.953 Cannot find device "nvmf_br" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:42.953 Cannot find device "nvmf_init_if" 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:42.953 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:42.953 Cannot find device "nvmf_init_if2" 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.954 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.954 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:43.213 01:32:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:43.213 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:43.213 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:10:43.213 00:10:43.213 --- 10.0.0.3 ping statistics --- 00:10:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.213 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:43.213 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:43.213 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:10:43.213 00:10:43.213 --- 10.0.0.4 ping statistics --- 00:10:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.213 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:43.213 00:10:43.213 --- 10.0.0.1 ping statistics --- 00:10:43.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.213 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:43.213 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:43.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:10:43.213 00:10:43.213 --- 10.0.0.2 ping statistics --- 00:10:43.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.214 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=79977 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 79977 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 79977 ']' 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.214 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.214 [2024-12-16 01:32:13.811068] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:43.214 [2024-12-16 01:32:13.811151] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.473 [2024-12-16 01:32:13.960588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.473 [2024-12-16 01:32:13.983162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.473 [2024-12-16 01:32:13.983228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.473 [2024-12-16 01:32:13.983241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.473 [2024-12-16 01:32:13.983251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.473 [2024-12-16 01:32:13.983259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.473 [2024-12-16 01:32:13.983643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.473 [2024-12-16 01:32:14.016587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.473 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.733 [2024-12-16 01:32:14.135595] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.733 [2024-12-16 01:32:14.151754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.733 malloc0 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:43.733 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:43.733 { 00:10:43.733 "params": { 00:10:43.733 "name": "Nvme$subsystem", 00:10:43.733 "trtype": "$TEST_TRANSPORT", 00:10:43.733 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.733 "adrfam": "ipv4", 00:10:43.733 "trsvcid": "$NVMF_PORT", 00:10:43.733 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.733 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.733 "hdgst": ${hdgst:-false}, 00:10:43.733 "ddgst": ${ddgst:-false} 00:10:43.733 }, 00:10:43.733 "method": "bdev_nvme_attach_controller" 00:10:43.733 } 00:10:43.733 EOF 00:10:43.734 )") 00:10:43.734 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:43.734 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
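Everything between nvmf_veth_init and the zcopy.sh RPCs above can be replayed by hand. A condensed sketch (one initiator/target veth pair instead of the two pairs the harness creates), assuming root, commands run from an SPDK checkout with a build under ./build, and a sleep standing in for the harness's waitforlisten:

# Virtual topology: initiator veth on the host, target veth inside a netns, joined by a bridge
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                          # reachability check, as in the trace

# Target inside the namespace on core 1 (-m 0x2), then the same RPCs zcopy.sh issues
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
sleep 2                                     # crude stand-in for waitforlisten on /var/tmp/spdk.sock
./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB bdev, 4 KiB blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The RPC socket at /var/tmp/spdk.sock is path-based, so rpc.py can be run from the host even though the target process lives inside the namespace, which is how the harness itself drives it.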
00:10:43.734 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:43.734 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:43.734 "params": { 00:10:43.734 "name": "Nvme1", 00:10:43.734 "trtype": "tcp", 00:10:43.734 "traddr": "10.0.0.3", 00:10:43.734 "adrfam": "ipv4", 00:10:43.734 "trsvcid": "4420", 00:10:43.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.734 "hdgst": false, 00:10:43.734 "ddgst": false 00:10:43.734 }, 00:10:43.734 "method": "bdev_nvme_attach_controller" 00:10:43.734 }' 00:10:43.734 [2024-12-16 01:32:14.250721] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:43.734 [2024-12-16 01:32:14.250816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80007 ] 00:10:43.995 [2024-12-16 01:32:14.400270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.995 [2024-12-16 01:32:14.419808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.995 [2024-12-16 01:32:14.456605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.995 Running I/O for 10 seconds... 00:10:46.317 6216.00 IOPS, 48.56 MiB/s [2024-12-16T01:32:17.913Z] 6312.00 IOPS, 49.31 MiB/s [2024-12-16T01:32:18.849Z] 6363.67 IOPS, 49.72 MiB/s [2024-12-16T01:32:19.786Z] 6376.50 IOPS, 49.82 MiB/s [2024-12-16T01:32:20.723Z] 6393.40 IOPS, 49.95 MiB/s [2024-12-16T01:32:21.660Z] 6399.67 IOPS, 50.00 MiB/s [2024-12-16T01:32:22.597Z] 6372.14 IOPS, 49.78 MiB/s [2024-12-16T01:32:23.975Z] 6305.50 IOPS, 49.26 MiB/s [2024-12-16T01:32:24.912Z] 6261.56 IOPS, 48.92 MiB/s [2024-12-16T01:32:24.912Z] 6255.60 IOPS, 48.87 MiB/s 00:10:54.254 Latency(us) 00:10:54.254 [2024-12-16T01:32:24.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.254 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:54.254 Verification LBA range: start 0x0 length 0x1000 00:10:54.254 Nvme1n1 : 10.01 6258.76 48.90 0.00 0.00 20386.67 3008.70 32648.84 00:10:54.254 [2024-12-16T01:32:24.912Z] =================================================================================================================== 00:10:54.254 [2024-12-16T01:32:24.912Z] Total : 6258.76 48.90 0.00 0.00 20386.67 3008.70 32648.84 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=80120 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:54.254 { 00:10:54.254 "params": { 00:10:54.254 "name": "Nvme$subsystem", 00:10:54.254 "trtype": "$TEST_TRANSPORT", 00:10:54.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:54.254 "adrfam": "ipv4", 00:10:54.254 "trsvcid": "$NVMF_PORT", 00:10:54.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:54.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:54.254 "hdgst": ${hdgst:-false}, 00:10:54.254 "ddgst": ${ddgst:-false} 00:10:54.254 }, 00:10:54.254 "method": "bdev_nvme_attach_controller" 00:10:54.254 } 00:10:54.254 EOF 00:10:54.254 )") 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:54.254 [2024-12-16 01:32:24.707778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.707845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:54.254 01:32:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:54.254 "params": { 00:10:54.254 "name": "Nvme1", 00:10:54.254 "trtype": "tcp", 00:10:54.254 "traddr": "10.0.0.3", 00:10:54.254 "adrfam": "ipv4", 00:10:54.254 "trsvcid": "4420", 00:10:54.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:54.254 "hdgst": false, 00:10:54.254 "ddgst": false 00:10:54.254 }, 00:10:54.254 "method": "bdev_nvme_attach_controller" 00:10:54.254 }' 00:10:54.254 [2024-12-16 01:32:24.719756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.719810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.727770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.727836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.739732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.739795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.751738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.751801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.763747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.763810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.765461] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:54.254 [2024-12-16 01:32:24.765597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80120 ] 00:10:54.254 [2024-12-16 01:32:24.775749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.775811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.787784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.787837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.799756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.799818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.811768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.811830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.823773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.823838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.835777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.835838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.847774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.847838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.859763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.859814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.871761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.871808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.883765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.883796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.895763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.254 [2024-12-16 01:32:24.895807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.254 [2024-12-16 01:32:24.907767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.255 [2024-12-16 01:32:24.907794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.913659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.514 [2024-12-16 01:32:24.919788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.919839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.931785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.931838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.935318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.514 [2024-12-16 01:32:24.943771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.943815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.955811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.955866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.967816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.967871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.973390] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:54.514 [2024-12-16 01:32:24.979807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.979854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:24.991808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:24.991859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.004047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.004096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.016137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.016204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.028076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.028123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.040084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.040130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.052105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.052150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.064117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.064165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 Running I/O for 5 seconds... 
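The config fragment printed by gen_nvmf_target_json above is what bdevperf consumes on --json: a bdev_nvme_attach_controller entry pointing at the listener created earlier. A standalone sketch of both runs from the trace, writing the config to a file instead of a process-substitution fd; the outer subsystems/bdev wrapper is the usual SPDK JSON-config shape and is assumed here rather than copied verbatim from the trace (the harness assembles it inside gen_nvmf_target_json):

cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 10-second sequential verify run, then the 5-second 50/50 randrw run, as in the trace
./build/examples/bdevperf --json bdevperf.json -t 10 -q 128 -w verify -o 8192
./build/examples/bdevperf --json bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192

The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs that follow are nvmf_subsystem_add_ns calls for NSID 1 issued while that namespace is still attached, so the target rejects each attempt.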
00:10:54.514 [2024-12-16 01:32:25.076119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.076163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.093523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.093587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.110504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.110564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.126588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.126644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.143765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.143814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.514 [2024-12-16 01:32:25.160195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.514 [2024-12-16 01:32:25.160242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.773 [2024-12-16 01:32:25.176904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.773 [2024-12-16 01:32:25.176951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.195794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.195843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.210361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.210408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.225957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.226005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.243943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.243989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.260065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.260112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.278199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.278246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.292887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.292937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.308289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 
[2024-12-16 01:32:25.308325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.327367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.327402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.341405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.341452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.357252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.357298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.375928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.375975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.390967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.391013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.408003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.408055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.774 [2024-12-16 01:32:25.423364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.774 [2024-12-16 01:32:25.423412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.434017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.434064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.448337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.448371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.464030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.464077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.475274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.475323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.492022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.492069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.508583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.508628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.524871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.524918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.541069] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.541116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.558713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.558761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.574143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.574190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.592333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.592382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.607201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.607249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.623500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.623574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.640981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.641033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.657476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.657520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.033 [2024-12-16 01:32:25.675157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.033 [2024-12-16 01:32:25.675205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.690448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.690497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.699779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.699829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.715959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.716007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.732240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.732286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.750292] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.750339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.765629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.765680] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.777098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.777145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.793938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.793984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.810319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.810366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.827974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.828022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.842276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.842323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.858561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.858634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.875746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.875783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.890779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.890814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.908404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.908453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.922977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.923025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.293 [2024-12-16 01:32:25.939335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.293 [2024-12-16 01:32:25.939385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.552 [2024-12-16 01:32:25.955439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.552 [2024-12-16 01:32:25.955488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.552 [2024-12-16 01:32:25.973308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.552 [2024-12-16 01:32:25.973357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.552 [2024-12-16 01:32:25.988502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.552 [2024-12-16 01:32:25.988564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.552 [2024-12-16 01:32:25.998576] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.552 [2024-12-16 01:32:25.998633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.552 [2024-12-16 01:32:26.012405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.012452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.028132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.028179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.046997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.047043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.061484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.061602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 12214.00 IOPS, 95.42 MiB/s [2024-12-16T01:32:26.211Z] [2024-12-16 01:32:26.077683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.077719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.093381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.093429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.112057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.112104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.127182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.127247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.142498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.142572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.152110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.152158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.168563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.168607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.184633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.184671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.553 [2024-12-16 01:32:26.203254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.553 [2024-12-16 01:32:26.203306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.218550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:55.815 [2024-12-16 01:32:26.218626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.233005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.233053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.249308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.249355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.265220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.265269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.274741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.274806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.289638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.289689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.306216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.306264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.322944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.322991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.339646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.339695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.356384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.356432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.373265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.373313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.389430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.389477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.407536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.407597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.423748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.423797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.440668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.815 [2024-12-16 01:32:26.440714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:55.815 [2024-12-16 01:32:26.456996] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:55.816 [2024-12-16 01:32:26.457043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.474499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.474564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.489698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.489735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.505899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.505960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.521992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.522029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.539749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.539800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.555373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.555421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.564402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.564450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.579923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.579973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.595601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.595650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.612642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.612691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.628991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.629039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.646442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.646477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.662481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.662517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.679894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.679945] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.696059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.696108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.712979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.713027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.101 [2024-12-16 01:32:26.730503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.101 [2024-12-16 01:32:26.730592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.745844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.745880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.763304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.763352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.778670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.778722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.796893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.796944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.811851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.811901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.828289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.828336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.843124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.843171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.858823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.858871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.374 [2024-12-16 01:32:26.868477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.374 [2024-12-16 01:32:26.868550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.883788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.883846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.901090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.901142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.917435] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.917486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.935093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.935141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.950582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.950645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.960286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.960335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.975196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.975229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:26.991382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:26.991430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:27.007406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:27.007455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.375 [2024-12-16 01:32:27.017299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.375 [2024-12-16 01:32:27.017347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.034109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.034160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.050267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.050315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.060130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.060180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 12167.50 IOPS, 95.06 MiB/s [2024-12-16T01:32:27.292Z] [2024-12-16 01:32:27.074788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.074836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.090677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.090727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.108031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.108080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.126250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:56.634 [2024-12-16 01:32:27.126301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.141508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.141564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.156691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.156726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.167076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.167125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.182653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.182704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.198982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.199030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.217149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.217199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.232282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.232330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.248335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.248382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.264352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.264399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.634 [2024-12-16 01:32:27.282241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.634 [2024-12-16 01:32:27.282289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.297718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.297771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.307033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.307081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.322815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.322865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.340283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.340332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.355588] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.355636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.367521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.367596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.384049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.384096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.399785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.399834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.412104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.412138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.428870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.428906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.445162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.445211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.461422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.461487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.479369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.479416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.493371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.493437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.509587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.509639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.525080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.525129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:56.894 [2024-12-16 01:32:27.534247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:56.894 [2024-12-16 01:32:27.534295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.549931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.549995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.559654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.559703] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.574806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.574858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.590102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.590149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.606957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.607005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.622314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.622363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.633965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.634012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.650854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.650902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.666287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.666324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.676523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.676629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.691753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.153 [2024-12-16 01:32:27.691802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.153 [2024-12-16 01:32:27.707822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.707871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.154 [2024-12-16 01:32:27.726746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.726796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.154 [2024-12-16 01:32:27.740869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.740919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.154 [2024-12-16 01:32:27.756494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.756567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.154 [2024-12-16 01:32:27.774006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.774052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.154 [2024-12-16 01:32:27.790894] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.790956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.154 [2024-12-16 01:32:27.808680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.154 [2024-12-16 01:32:27.808726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.823428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.823478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.833094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.833145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.847986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.848049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.863150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.863199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.880733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.880784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.895765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.895816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.905941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.905978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.922094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.922145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.939775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.939825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.956316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.956364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.972833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.972882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:27.991645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:27.991695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:28.006129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:28.006176] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:28.022427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:28.022475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:28.039108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.413 [2024-12-16 01:32:28.039156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.413 [2024-12-16 01:32:28.055664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.414 [2024-12-16 01:32:28.055713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.071605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.071653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 12199.67 IOPS, 95.31 MiB/s [2024-12-16T01:32:28.331Z] [2024-12-16 01:32:28.087803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.087838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.104460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.104509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.121201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.121250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.138347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.138385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.154127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.154176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.163312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.163361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.180047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.180227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.190313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.190436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.204449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.204642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.216298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.216406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 
01:32:28.233935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.234042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.248618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.248729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.264598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.264750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.289731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.289919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.305447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.305642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.673 [2024-12-16 01:32:28.314836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.673 [2024-12-16 01:32:28.314997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.330486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.330558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.346875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.346922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.362160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.362208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.378438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.378485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.393943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.393989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.409565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.409615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.426818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.426867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.443311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.443363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.459730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.459779] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.478556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.478635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.493258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.493306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.503138] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.503185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.517975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.518021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.533466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.533547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.552074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.552123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.567637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.567699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.933 [2024-12-16 01:32:28.576685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.933 [2024-12-16 01:32:28.576732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.592318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.592366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.608137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.608185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.625007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.625055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.643790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.643839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.657435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.657483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.674266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.674314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.688579] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.688638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.704463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.704512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.721724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.721775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.737785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.737850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.755811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.755858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.192 [2024-12-16 01:32:28.771426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.192 [2024-12-16 01:32:28.771474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.193 [2024-12-16 01:32:28.789382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.193 [2024-12-16 01:32:28.789431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.193 [2024-12-16 01:32:28.805607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.193 [2024-12-16 01:32:28.805658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.193 [2024-12-16 01:32:28.822494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.193 [2024-12-16 01:32:28.822566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.193 [2024-12-16 01:32:28.839247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.193 [2024-12-16 01:32:28.839294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.856031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.856076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.874123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.874172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.887939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.887987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.903179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.903228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.918998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.919046] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.936523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.936594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.952169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.952215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.963822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.963870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.978834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.978883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:28.995212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:28.995259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:29.011425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.011472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:29.028959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.029006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:29.045403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.045450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:29.061635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.061682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:29.073055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.073103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 12258.25 IOPS, 95.77 MiB/s [2024-12-16T01:32:29.110Z] [2024-12-16 01:32:29.088522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.088576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.452 [2024-12-16 01:32:29.107364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.452 [2024-12-16 01:32:29.107411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.711 [2024-12-16 01:32:29.121932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.711 [2024-12-16 01:32:29.121978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.711 [2024-12-16 01:32:29.134223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.711 [2024-12-16 01:32:29.134272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.711 [2024-12-16 
01:32:29.150946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.711 [2024-12-16 01:32:29.150995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.711 [2024-12-16 01:32:29.165710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.711 [2024-12-16 01:32:29.165758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.711 [2024-12-16 01:32:29.175548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.711 [2024-12-16 01:32:29.175652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.711 [2024-12-16 01:32:29.191158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.711 [2024-12-16 01:32:29.191204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.208981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.209029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.224007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.224053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.239483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.239557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.256686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.256735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.272673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.272720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.290980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.291027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.307362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.307409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.323504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.323560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.340667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.340713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.712 [2024-12-16 01:32:29.357116] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.712 [2024-12-16 01:32:29.357163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.374214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.374261] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.390508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.390583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.407442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.407489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.421771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.421823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.439605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.439699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.455261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.455328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.473415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.473465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.488339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.488392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.498166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.498214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.514149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.514196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.531510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.531584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.548028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.548075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.971 [2024-12-16 01:32:29.564928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.971 [2024-12-16 01:32:29.564974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.972 [2024-12-16 01:32:29.581366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.972 [2024-12-16 01:32:29.581413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.972 [2024-12-16 01:32:29.597736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.972 [2024-12-16 01:32:29.597784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.972 [2024-12-16 01:32:29.615105] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.972 [2024-12-16 01:32:29.615152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.630779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.630827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.649599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.649635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.664153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.664200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.679456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.679507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.689130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.689177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.705082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.705132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.722791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.722855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.739715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.739782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.755722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.755790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.774314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.774378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.790550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.790638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.807168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.807240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.824231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.824317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.840068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.840142] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.851977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.852045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.867568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.867646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.231 [2024-12-16 01:32:29.884732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.231 [2024-12-16 01:32:29.884814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.490 [2024-12-16 01:32:29.899457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.490 [2024-12-16 01:32:29.899513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.490 [2024-12-16 01:32:29.914533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.490 [2024-12-16 01:32:29.914608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.490 [2024-12-16 01:32:29.930909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.490 [2024-12-16 01:32:29.930955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:29.947896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:29.947943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:29.963022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:29.963070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:29.972845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:29.972891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:29.988799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:29.988861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.004596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.004643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.014837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.014876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.031100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.031150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.047818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.047868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.065664] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.065701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 12273.00 IOPS, 95.88 MiB/s [2024-12-16T01:32:30.149Z] [2024-12-16 01:32:30.079489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.079559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 00:10:59.491 Latency(us) 00:10:59.491 [2024-12-16T01:32:30.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.491 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:59.491 Nvme1n1 : 5.01 12272.19 95.88 0.00 0.00 10417.13 4200.26 19303.33 00:10:59.491 [2024-12-16T01:32:30.149Z] =================================================================================================================== 00:10:59.491 [2024-12-16T01:32:30.149Z] Total : 12272.19 95.88 0.00 0.00 10417.13 4200.26 19303.33 00:10:59.491 [2024-12-16 01:32:30.089367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.089415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.101397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.101436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.113373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.113444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.125406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.125462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.491 [2024-12-16 01:32:30.137419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.491 [2024-12-16 01:32:30.137476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 [2024-12-16 01:32:30.149474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.750 [2024-12-16 01:32:30.149539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 [2024-12-16 01:32:30.165451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.750 [2024-12-16 01:32:30.165507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 [2024-12-16 01:32:30.177432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.750 [2024-12-16 01:32:30.177465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 [2024-12-16 01:32:30.189449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.750 [2024-12-16 01:32:30.189500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 [2024-12-16 01:32:30.201426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.750 [2024-12-16 01:32:30.201471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 [2024-12-16 
01:32:30.213423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.750 [2024-12-16 01:32:30.213464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.750 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (80120) - No such process 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 80120 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 delay0 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.750 01:32:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:11:00.009 [2024-12-16 01:32:30.422974] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:06.567 Initializing NVMe Controllers 00:11:06.567 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.567 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:06.567 Initialization complete. Launching workers. 
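Note: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages above is the target rejecting repeated attempts to re-register NSID 1 while the zcopy I/O job still holds it; the run keeps going through them and the test still completes further below. Once the background I/O process (pid 80120 here) has exited, zcopy.sh swaps the namespace for a delay bdev and drives the abort example at the TCP listener, whose results follow immediately below. A condensed sketch of that sequence, assuming rpc_cmd is the usual wrapper around scripts/rpc.py and using the subsystem and listener from this run:

  # Sketch only; commands mirror the rpc_cmd/abort invocations visible in the trace.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # delay latencies in microseconds, ~1 s per op
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'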
00:11:06.567 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 817 00:11:06.567 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1100, failed to submit 37 00:11:06.567 success 986, unsuccessful 114, failed 0 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.567 rmmod nvme_tcp 00:11:06.567 rmmod nvme_fabrics 00:11:06.567 rmmod nvme_keyring 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 79977 ']' 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 79977 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 79977 ']' 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 79977 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79977 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:06.567 killing process with pid 79977 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79977' 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 79977 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 79977 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.567 01:32:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:06.567 01:32:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:11:06.567 ************************************ 00:11:06.567 END TEST nvmf_zcopy 00:11:06.567 ************************************ 00:11:06.567 00:11:06.567 real 0m24.009s 00:11:06.567 user 0m39.584s 00:11:06.567 sys 0m6.501s 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.567 01:32:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.567 ************************************ 00:11:06.567 START TEST nvmf_nmic 00:11:06.567 ************************************ 00:11:06.567 01:32:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:06.826 * Looking for test storage... 00:11:06.826 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.826 --rc genhtml_branch_coverage=1 00:11:06.826 --rc genhtml_function_coverage=1 00:11:06.826 --rc genhtml_legend=1 00:11:06.826 --rc geninfo_all_blocks=1 00:11:06.826 --rc geninfo_unexecuted_blocks=1 00:11:06.826 00:11:06.826 ' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.826 --rc genhtml_branch_coverage=1 00:11:06.826 --rc genhtml_function_coverage=1 00:11:06.826 --rc genhtml_legend=1 00:11:06.826 --rc geninfo_all_blocks=1 00:11:06.826 --rc geninfo_unexecuted_blocks=1 00:11:06.826 00:11:06.826 ' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.826 --rc genhtml_branch_coverage=1 00:11:06.826 --rc genhtml_function_coverage=1 00:11:06.826 --rc genhtml_legend=1 00:11:06.826 --rc geninfo_all_blocks=1 00:11:06.826 --rc geninfo_unexecuted_blocks=1 00:11:06.826 00:11:06.826 ' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.826 --rc genhtml_branch_coverage=1 00:11:06.826 --rc genhtml_function_coverage=1 00:11:06.826 --rc genhtml_legend=1 00:11:06.826 --rc geninfo_all_blocks=1 00:11:06.826 --rc geninfo_unexecuted_blocks=1 00:11:06.826 00:11:06.826 ' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.826 01:32:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.826 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:06.827 01:32:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:06.827 Cannot 
find device "nvmf_init_br" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:06.827 Cannot find device "nvmf_init_br2" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:06.827 Cannot find device "nvmf_tgt_br" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.827 Cannot find device "nvmf_tgt_br2" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:06.827 Cannot find device "nvmf_init_br" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:06.827 Cannot find device "nvmf_init_br2" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:06.827 Cannot find device "nvmf_tgt_br" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:06.827 Cannot find device "nvmf_tgt_br2" 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:11:06.827 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:07.085 Cannot find device "nvmf_br" 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:07.085 Cannot find device "nvmf_init_if" 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:07.085 Cannot find device "nvmf_init_if2" 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:07.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:07.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:07.085 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:07.086 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:07.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:07.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:07.344 00:11:07.344 --- 10.0.0.3 ping statistics --- 00:11:07.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.344 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:07.344 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:07.344 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:11:07.344 00:11:07.344 --- 10.0.0.4 ping statistics --- 00:11:07.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.344 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:07.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:07.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:07.344 00:11:07.344 --- 10.0.0.1 ping statistics --- 00:11:07.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.344 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:07.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:07.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:11:07.344 00:11:07.344 --- 10.0.0.2 ping statistics --- 00:11:07.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.344 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=80500 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 80500 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 80500 ']' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.344 01:32:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.344 [2024-12-16 01:32:37.883293] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
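At this point the target application is launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 80500 in this run) and the harness waits for its RPC socket at /var/tmp/spdk.sock before configuring it; the startup banner continues below. Condensed into plain scripts/rpc.py calls, the bring-up the trace performs next is roughly the following sketch (NQN, serial, bdev and listener values are the ones used in this run):

  # Sketch of the nmic target configuration issued over the default /var/tmp/spdk.sock:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # flags exactly as the harness passes them
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Test case 1 then repeats the add_ns call against a second subsystem (cnode2) and expects it to fail, since Malloc0 is already claimed exclusive_write by cnode1.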
00:11:07.344 [2024-12-16 01:32:37.883382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.603 [2024-12-16 01:32:38.032546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.603 [2024-12-16 01:32:38.059559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.603 [2024-12-16 01:32:38.059871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.603 [2024-12-16 01:32:38.060080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.603 [2024-12-16 01:32:38.060283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.603 [2024-12-16 01:32:38.060356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.603 [2024-12-16 01:32:38.061484] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.603 [2024-12-16 01:32:38.061603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.603 [2024-12-16 01:32:38.061646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.603 [2024-12-16 01:32:38.061787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.603 [2024-12-16 01:32:38.097254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.603 [2024-12-16 01:32:38.199045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.603 Malloc0 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.603 01:32:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.603 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 [2024-12-16 01:32:38.260884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 test case1: single bdev can't be used in multiple subsystems 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.861 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.861 [2024-12-16 01:32:38.288706] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:07.861 [2024-12-16 01:32:38.288765] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:07.861 [2024-12-16 01:32:38.288786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:07.861 request: 00:11:07.861 { 00:11:07.861 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:07.861 "namespace": { 00:11:07.861 "bdev_name": "Malloc0", 00:11:07.861 "no_auto_visible": false, 00:11:07.861 "hide_metadata": false 00:11:07.861 }, 00:11:07.861 "method": "nvmf_subsystem_add_ns", 00:11:07.861 "req_id": 1 00:11:07.861 } 00:11:07.861 Got JSON-RPC error response 00:11:07.861 response: 00:11:07.861 { 00:11:07.861 "code": -32602, 00:11:07.861 "message": "Invalid parameters" 00:11:07.861 } 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:07.862 Adding namespace failed - expected result. 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:07.862 test case2: host connect to nvmf target in multiple paths 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.862 [2024-12-16 01:32:38.300874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:07.862 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:11:08.120 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.120 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:08.120 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.120 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:08.120 01:32:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 
00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:10.017 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:10.017 [global] 00:11:10.017 thread=1 00:11:10.017 invalidate=1 00:11:10.017 rw=write 00:11:10.017 time_based=1 00:11:10.017 runtime=1 00:11:10.017 ioengine=libaio 00:11:10.017 direct=1 00:11:10.017 bs=4096 00:11:10.017 iodepth=1 00:11:10.017 norandommap=0 00:11:10.017 numjobs=1 00:11:10.017 00:11:10.017 verify_dump=1 00:11:10.017 verify_backlog=512 00:11:10.017 verify_state_save=0 00:11:10.017 do_verify=1 00:11:10.017 verify=crc32c-intel 00:11:10.017 [job0] 00:11:10.017 filename=/dev/nvme0n1 00:11:10.017 Could not set queue depth (nvme0n1) 00:11:10.274 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:10.274 fio-3.35 00:11:10.274 Starting 1 thread 00:11:11.650 00:11:11.650 job0: (groupid=0, jobs=1): err= 0: pid=80579: Mon Dec 16 01:32:41 2024 00:11:11.650 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:11.650 slat (nsec): min=11587, max=48938, avg=15050.54, stdev=4520.86 00:11:11.650 clat (usec): min=126, max=3971, avg=177.42, stdev=153.66 00:11:11.650 lat (usec): min=138, max=3992, avg=192.47, stdev=154.21 00:11:11.650 clat percentiles (usec): 00:11:11.650 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:11:11.650 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:11:11.650 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 210], 00:11:11.650 | 99.00th=[ 235], 99.50th=[ 247], 99.90th=[ 3326], 99.95th=[ 3687], 00:11:11.650 | 99.99th=[ 3982] 00:11:11.650 write: IOPS=3107, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1001msec); 0 zone resets 00:11:11.650 slat (usec): min=13, max=111, avg=21.28, stdev= 6.10 00:11:11.650 clat (usec): min=77, max=807, avg=106.60, stdev=21.31 00:11:11.650 lat (usec): min=94, max=829, avg=127.89, stdev=23.15 00:11:11.650 clat percentiles (usec): 00:11:11.650 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 93], 00:11:11.650 | 30.00th=[ 96], 40.00th=[ 99], 50.00th=[ 102], 60.00th=[ 108], 00:11:11.650 | 70.00th=[ 114], 80.00th=[ 121], 90.00th=[ 131], 95.00th=[ 139], 00:11:11.650 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 186], 99.95th=[ 253], 00:11:11.650 | 99.99th=[ 807] 00:11:11.650 bw ( KiB/s): min=12288, max=12288, per=98.85%, avg=12288.00, stdev= 0.00, samples=1 00:11:11.650 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:11.650 lat (usec) : 100=22.00%, 250=77.75%, 500=0.08%, 750=0.02%, 1000=0.03% 00:11:11.650 lat (msec) : 2=0.02%, 4=0.11% 00:11:11.650 cpu : usr=2.30%, sys=9.00%, ctx=6183, majf=0, minf=5 00:11:11.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.650 issued rwts: total=3072,3111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.650 00:11:11.650 Run status group 0 (all jobs): 00:11:11.650 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:11.650 WRITE: bw=12.1MiB/s (12.7MB/s), 12.1MiB/s-12.1MiB/s (12.7MB/s-12.7MB/s), io=12.2MiB (12.7MB), run=1001-1001msec 00:11:11.650 00:11:11.650 Disk stats 
(read/write): 00:11:11.650 nvme0n1: ios=2610/3064, merge=0/0, ticks=472/357, in_queue=829, util=90.98% 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:11.650 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:11.650 rmmod nvme_tcp 00:11:11.650 rmmod nvme_fabrics 00:11:11.650 rmmod nvme_keyring 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 80500 ']' 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 80500 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 80500 ']' 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 80500 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:11.650 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80500 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.651 killing process with pid 80500 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80500' 00:11:11.651 01:32:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 80500 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 80500 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:11.651 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:11:11.910 00:11:11.910 real 0m5.356s 00:11:11.910 user 0m15.507s 00:11:11.910 sys 0m2.327s 00:11:11.910 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.910 ************************************ 00:11:11.910 01:32:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:11.910 END TEST nvmf_nmic 00:11:11.910 ************************************ 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.170 ************************************ 00:11:12.170 START TEST nvmf_fio_target 00:11:12.170 ************************************ 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:12.170 * Looking for test storage... 00:11:12.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:12.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.170 --rc genhtml_branch_coverage=1 00:11:12.170 --rc genhtml_function_coverage=1 00:11:12.170 --rc genhtml_legend=1 00:11:12.170 --rc geninfo_all_blocks=1 00:11:12.170 --rc geninfo_unexecuted_blocks=1 00:11:12.170 00:11:12.170 ' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:12.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.170 --rc genhtml_branch_coverage=1 00:11:12.170 --rc genhtml_function_coverage=1 00:11:12.170 --rc genhtml_legend=1 00:11:12.170 --rc geninfo_all_blocks=1 00:11:12.170 --rc geninfo_unexecuted_blocks=1 00:11:12.170 00:11:12.170 ' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:12.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.170 --rc genhtml_branch_coverage=1 00:11:12.170 --rc genhtml_function_coverage=1 00:11:12.170 --rc genhtml_legend=1 00:11:12.170 --rc geninfo_all_blocks=1 00:11:12.170 --rc geninfo_unexecuted_blocks=1 00:11:12.170 00:11:12.170 ' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:12.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.170 --rc genhtml_branch_coverage=1 00:11:12.170 --rc genhtml_function_coverage=1 00:11:12.170 --rc genhtml_legend=1 00:11:12.170 --rc geninfo_all_blocks=1 00:11:12.170 --rc geninfo_unexecuted_blocks=1 00:11:12.170 00:11:12.170 ' 00:11:12.170 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:12.171 
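The cmp_versions walk traced above amounts to splitting both version strings on '.', '-' and ':' and comparing them field by field. A minimal standalone sketch of that check, assuming purely numeric fields (the real scripts/common.sh helper additionally validates each field through decimal()):

    # lt VER1 VER2 -> exit 0 if VER1 sorts strictly before VER2 (illustrative sketch only)
    lt() {
        local -a a b
        local i
        IFS='.-:' read -ra a <<< "$1"      # same IFS=.-: split seen in the trace
        IFS='.-:' read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                            # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov older than 2.x"

In the run above the comparison of 1.15 against 2 returns success, which is why the pre-2.0 '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' option set is exported for this lcov.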
01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:12.171 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.171 01:32:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:12.171 Cannot find device "nvmf_init_br" 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:12.171 Cannot find device "nvmf_init_br2" 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:12.171 Cannot find device "nvmf_tgt_br" 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:11:12.171 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:12.430 Cannot find device "nvmf_tgt_br2" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:12.430 Cannot find device "nvmf_init_br" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:12.430 Cannot find device "nvmf_init_br2" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:12.430 Cannot find device "nvmf_tgt_br" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:12.430 Cannot find device "nvmf_tgt_br2" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:12.430 Cannot find device "nvmf_br" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:12.430 Cannot find device "nvmf_init_if" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:12.430 Cannot find device "nvmf_init_if2" 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:12.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:11:12.430 
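The nvmf_veth_init sequence that follows builds the virtual test topology: the initiator ends of the veth pairs stay on the host with 10.0.0.1/10.0.0.2, the target ends are moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.3/10.0.0.4, and the host-side peers are joined by the nvmf_br bridge. Condensed into a sketch (one interface pair shown; the trace repeats the same steps for the *_if2/*_br2 pair, and the SPDK_NVMF iptables comments are dropped here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end will live in the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge the host-side peers together
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # target address should now answer

The ping corresponds to nvmf/common.sh@222 in the trace; once 10.0.0.3 answers from the host, the NVMe/TCP listener can be placed on it.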
01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:12.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:12.430 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:12.430 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:12.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:12.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:12.707 00:11:12.707 --- 10.0.0.3 ping statistics --- 00:11:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.707 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:12.707 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:12.707 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:11:12.707 00:11:12.707 --- 10.0.0.4 ping statistics --- 00:11:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.707 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:12.707 00:11:12.707 --- 10.0.0.1 ping statistics --- 00:11:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.707 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:12.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:12.707 00:11:12.707 --- 10.0.0.2 ping statistics --- 00:11:12.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.707 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=80814 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 80814 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 80814 ']' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.707 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.708 [2024-12-16 01:32:43.260085] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
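Once nvmf_tgt is running inside the namespace and listening on /var/tmp/spdk.sock, target/fio.sh provisions it over JSON-RPC. The calls that appear in the trace below condense to roughly the following sequence (arguments copied from the log; the real script accumulates the bdev lists step by step rather than in a loop):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, flags as logged
    for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done        # Malloc0..Malloc6: 64 MiB bdevs, 512 B blocks
    $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    $rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 \
        --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053

The host side then waits until lsblk reports four block devices with serial SPDKISFASTANDAWESOME, one per attached namespace, before handing /dev/nvme0n1 through /dev/nvme0n4 to the fio jobs.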
00:11:12.708 [2024-12-16 01:32:43.260173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.988 [2024-12-16 01:32:43.407673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.988 [2024-12-16 01:32:43.428938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.988 [2024-12-16 01:32:43.428996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.988 [2024-12-16 01:32:43.429008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.988 [2024-12-16 01:32:43.429016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.988 [2024-12-16 01:32:43.429024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.988 [2024-12-16 01:32:43.429842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.988 [2024-12-16 01:32:43.430784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.988 [2024-12-16 01:32:43.430881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.988 [2024-12-16 01:32:43.430897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.988 [2024-12-16 01:32:43.460779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.988 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:13.246 [2024-12-16 01:32:43.839278] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.246 01:32:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.812 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:13.812 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.071 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:14.071 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.329 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:14.329 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.588 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:14.588 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:14.846 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.104 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:15.104 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.362 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:15.362 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:15.621 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:15.621 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:15.880 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.138 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:16.138 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.396 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:16.396 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:16.655 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:16.913 [2024-12-16 01:32:47.474153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:16.913 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:17.172 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:17.431 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:17.689 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:17.689 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.689 01:32:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.689 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:17.689 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:17.689 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:19.592 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:19.592 [global] 00:11:19.592 thread=1 00:11:19.592 invalidate=1 00:11:19.592 rw=write 00:11:19.592 time_based=1 00:11:19.592 runtime=1 00:11:19.592 ioengine=libaio 00:11:19.592 direct=1 00:11:19.592 bs=4096 00:11:19.592 iodepth=1 00:11:19.592 norandommap=0 00:11:19.592 numjobs=1 00:11:19.592 00:11:19.592 verify_dump=1 00:11:19.592 verify_backlog=512 00:11:19.592 verify_state_save=0 00:11:19.592 do_verify=1 00:11:19.592 verify=crc32c-intel 00:11:19.592 [job0] 00:11:19.592 filename=/dev/nvme0n1 00:11:19.592 [job1] 00:11:19.592 filename=/dev/nvme0n2 00:11:19.592 [job2] 00:11:19.592 filename=/dev/nvme0n3 00:11:19.592 [job3] 00:11:19.592 filename=/dev/nvme0n4 00:11:19.850 Could not set queue depth (nvme0n1) 00:11:19.851 Could not set queue depth (nvme0n2) 00:11:19.851 Could not set queue depth (nvme0n3) 00:11:19.851 Could not set queue depth (nvme0n4) 00:11:19.851 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.851 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.851 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.851 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.851 fio-3.35 00:11:19.851 Starting 4 threads 00:11:21.227 00:11:21.227 job0: (groupid=0, jobs=1): err= 0: pid=80991: Mon Dec 16 01:32:51 2024 00:11:21.227 read: IOPS=1751, BW=7005KiB/s (7173kB/s)(7012KiB/1001msec) 00:11:21.227 slat (nsec): min=12837, max=41869, avg=15486.74, stdev=3130.68 00:11:21.227 clat (usec): min=153, max=5378, avg=289.28, stdev=168.50 00:11:21.227 lat (usec): min=172, max=5412, avg=304.77, stdev=169.32 00:11:21.227 clat percentiles (usec): 00:11:21.227 | 1.00th=[ 198], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:11:21.227 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 273], 60.00th=[ 277], 00:11:21.227 | 70.00th=[ 285], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 367], 00:11:21.227 | 99.00th=[ 515], 99.50th=[ 562], 99.90th=[ 4015], 99.95th=[ 5407], 00:11:21.227 | 99.99th=[ 
5407] 00:11:21.227 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:21.227 slat (nsec): min=17280, max=74057, avg=22165.23, stdev=4821.77 00:11:21.227 clat (usec): min=105, max=6803, avg=201.90, stdev=155.27 00:11:21.227 lat (usec): min=131, max=6826, avg=224.06, stdev=155.70 00:11:21.227 clat percentiles (usec): 00:11:21.227 | 1.00th=[ 116], 5.00th=[ 129], 10.00th=[ 149], 20.00th=[ 188], 00:11:21.227 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:11:21.227 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 231], 00:11:21.227 | 99.00th=[ 338], 99.50th=[ 371], 99.90th=[ 506], 99.95th=[ 2008], 00:11:21.227 | 99.99th=[ 6783] 00:11:21.227 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:21.227 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:21.227 lat (usec) : 250=54.75%, 500=44.54%, 750=0.58% 00:11:21.227 lat (msec) : 4=0.05%, 10=0.08% 00:11:21.227 cpu : usr=1.10%, sys=6.20%, ctx=3801, majf=0, minf=13 00:11:21.227 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.227 issued rwts: total=1753,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.227 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.227 job1: (groupid=0, jobs=1): err= 0: pid=80992: Mon Dec 16 01:32:51 2024 00:11:21.227 read: IOPS=2930, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1001msec) 00:11:21.227 slat (nsec): min=12105, max=58833, avg=14345.69, stdev=2441.99 00:11:21.227 clat (usec): min=140, max=1859, avg=168.43, stdev=46.23 00:11:21.227 lat (usec): min=153, max=1876, avg=182.77, stdev=46.46 00:11:21.227 clat percentiles (usec): 00:11:21.227 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:11:21.227 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:21.227 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:11:21.227 | 99.00th=[ 204], 99.50th=[ 215], 99.90th=[ 429], 99.95th=[ 1860], 00:11:21.227 | 99.99th=[ 1860] 00:11:21.227 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:21.227 slat (nsec): min=14885, max=94573, avg=21305.72, stdev=3635.43 00:11:21.227 clat (usec): min=94, max=568, avg=126.36, stdev=13.77 00:11:21.227 lat (usec): min=114, max=587, avg=147.66, stdev=14.51 00:11:21.227 clat percentiles (usec): 00:11:21.227 | 1.00th=[ 102], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 118], 00:11:21.227 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:11:21.227 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:11:21.227 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 182], 99.95th=[ 243], 00:11:21.227 | 99.99th=[ 570] 00:11:21.227 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.227 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.227 lat (usec) : 100=0.33%, 250=99.55%, 500=0.07%, 750=0.02% 00:11:21.227 lat (msec) : 2=0.03% 00:11:21.227 cpu : usr=2.00%, sys=8.90%, ctx=6006, majf=0, minf=11 00:11:21.227 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.228 issued rwts: total=2933,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:21.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.228 job2: (groupid=0, jobs=1): err= 0: pid=80993: Mon Dec 16 01:32:51 2024 00:11:21.228 read: IOPS=1778, BW=7113KiB/s (7284kB/s)(7120KiB/1001msec) 00:11:21.228 slat (nsec): min=12504, max=43661, avg=15996.69, stdev=4012.24 00:11:21.228 clat (usec): min=159, max=2458, avg=288.17, stdev=82.28 00:11:21.228 lat (usec): min=174, max=2487, avg=304.17, stdev=83.97 00:11:21.228 clat percentiles (usec): 00:11:21.228 | 1.00th=[ 176], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:11:21.228 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:11:21.228 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 445], 00:11:21.228 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 1450], 99.95th=[ 2474], 00:11:21.228 | 99.99th=[ 2474] 00:11:21.228 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:21.228 slat (nsec): min=17853, max=99363, avg=22141.58, stdev=5075.45 00:11:21.228 clat (usec): min=112, max=1803, avg=198.39, stdev=45.50 00:11:21.228 lat (usec): min=132, max=1841, avg=220.54, stdev=46.06 00:11:21.228 clat percentiles (usec): 00:11:21.228 | 1.00th=[ 121], 5.00th=[ 135], 10.00th=[ 169], 20.00th=[ 188], 00:11:21.228 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:11:21.228 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 229], 00:11:21.228 | 99.00th=[ 249], 99.50th=[ 265], 99.90th=[ 515], 99.95th=[ 529], 00:11:21.228 | 99.99th=[ 1811] 00:11:21.228 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:21.228 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:21.228 lat (usec) : 250=55.75%, 500=43.36%, 750=0.78%, 1000=0.03% 00:11:21.228 lat (msec) : 2=0.05%, 4=0.03% 00:11:21.228 cpu : usr=1.50%, sys=6.00%, ctx=3828, majf=0, minf=3 00:11:21.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.228 issued rwts: total=1780,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.228 job3: (groupid=0, jobs=1): err= 0: pid=80994: Mon Dec 16 01:32:51 2024 00:11:21.228 read: IOPS=2642, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:11:21.228 slat (nsec): min=11635, max=32277, avg=13227.78, stdev=1372.84 00:11:21.228 clat (usec): min=150, max=531, avg=177.10, stdev=14.34 00:11:21.228 lat (usec): min=163, max=545, avg=190.33, stdev=14.44 00:11:21.228 clat percentiles (usec): 00:11:21.228 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:11:21.228 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:11:21.228 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:11:21.228 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 225], 99.95th=[ 445], 00:11:21.228 | 99.99th=[ 529] 00:11:21.228 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:21.228 slat (nsec): min=15056, max=94593, avg=21996.05, stdev=6957.22 00:11:21.228 clat (usec): min=106, max=237, avg=136.71, stdev=12.82 00:11:21.228 lat (usec): min=125, max=331, avg=158.70, stdev=16.15 00:11:21.228 clat percentiles (usec): 00:11:21.228 | 1.00th=[ 112], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 127], 00:11:21.228 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:11:21.228 | 70.00th=[ 
143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:11:21.228 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 231], 00:11:21.228 | 99.99th=[ 237] 00:11:21.228 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:21.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:21.228 lat (usec) : 250=99.97%, 500=0.02%, 750=0.02% 00:11:21.228 cpu : usr=2.00%, sys=8.20%, ctx=5719, majf=0, minf=17 00:11:21.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.228 issued rwts: total=2645,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.228 00:11:21.228 Run status group 0 (all jobs): 00:11:21.228 READ: bw=35.6MiB/s (37.3MB/s), 7005KiB/s-11.4MiB/s (7173kB/s-12.0MB/s), io=35.6MiB (37.3MB), run=1001-1001msec 00:11:21.228 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:11:21.228 00:11:21.228 Disk stats (read/write): 00:11:21.228 nvme0n1: ios=1586/1675, merge=0/0, ticks=472/358, in_queue=830, util=86.57% 00:11:21.228 nvme0n2: ios=2595/2560, merge=0/0, ticks=480/339, in_queue=819, util=88.64% 00:11:21.228 nvme0n3: ios=1536/1745, merge=0/0, ticks=441/354, in_queue=795, util=88.79% 00:11:21.228 nvme0n4: ios=2311/2560, merge=0/0, ticks=412/370, in_queue=782, util=89.74% 00:11:21.228 01:32:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:21.228 [global] 00:11:21.228 thread=1 00:11:21.228 invalidate=1 00:11:21.228 rw=randwrite 00:11:21.228 time_based=1 00:11:21.228 runtime=1 00:11:21.228 ioengine=libaio 00:11:21.228 direct=1 00:11:21.228 bs=4096 00:11:21.228 iodepth=1 00:11:21.228 norandommap=0 00:11:21.228 numjobs=1 00:11:21.228 00:11:21.228 verify_dump=1 00:11:21.228 verify_backlog=512 00:11:21.228 verify_state_save=0 00:11:21.228 do_verify=1 00:11:21.228 verify=crc32c-intel 00:11:21.228 [job0] 00:11:21.228 filename=/dev/nvme0n1 00:11:21.228 [job1] 00:11:21.228 filename=/dev/nvme0n2 00:11:21.228 [job2] 00:11:21.228 filename=/dev/nvme0n3 00:11:21.228 [job3] 00:11:21.228 filename=/dev/nvme0n4 00:11:21.228 Could not set queue depth (nvme0n1) 00:11:21.228 Could not set queue depth (nvme0n2) 00:11:21.228 Could not set queue depth (nvme0n3) 00:11:21.228 Could not set queue depth (nvme0n4) 00:11:21.228 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.228 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.228 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.228 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:21.228 fio-3.35 00:11:21.228 Starting 4 threads 00:11:22.605 00:11:22.605 job0: (groupid=0, jobs=1): err= 0: pid=81057: Mon Dec 16 01:32:52 2024 00:11:22.605 read: IOPS=1985, BW=7940KiB/s (8131kB/s)(7948KiB/1001msec) 00:11:22.605 slat (nsec): min=9012, max=53588, avg=12685.82, stdev=3928.15 00:11:22.605 clat (usec): min=168, max=439, avg=260.19, stdev=18.35 00:11:22.605 lat (usec): min=182, max=456, avg=272.87, stdev=18.66 
00:11:22.605 clat percentiles (usec): 00:11:22.605 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:11:22.605 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 262], 00:11:22.605 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:11:22.605 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 420], 99.95th=[ 441], 00:11:22.605 | 99.99th=[ 441] 00:11:22.605 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:22.605 slat (nsec): min=11029, max=69355, avg=16762.88, stdev=4578.20 00:11:22.605 clat (usec): min=112, max=420, avg=203.82, stdev=17.05 00:11:22.605 lat (usec): min=132, max=435, avg=220.59, stdev=17.48 00:11:22.605 clat percentiles (usec): 00:11:22.605 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:11:22.605 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:11:22.605 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 00:11:22.605 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 285], 99.95th=[ 306], 00:11:22.605 | 99.99th=[ 420] 00:11:22.605 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:22.605 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:22.605 lat (usec) : 250=63.32%, 500=36.68% 00:11:22.605 cpu : usr=1.40%, sys=5.10%, ctx=4039, majf=0, minf=13 00:11:22.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.605 issued rwts: total=1987,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.605 job1: (groupid=0, jobs=1): err= 0: pid=81058: Mon Dec 16 01:32:52 2024 00:11:22.605 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:11:22.605 slat (nsec): min=11488, max=39332, avg=13729.10, stdev=2050.41 00:11:22.605 clat (usec): min=139, max=2117, avg=172.81, stdev=40.33 00:11:22.605 lat (usec): min=152, max=2129, avg=186.54, stdev=40.40 00:11:22.605 clat percentiles (usec): 00:11:22.605 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:11:22.605 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:11:22.605 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:11:22.605 | 99.00th=[ 210], 99.50th=[ 221], 99.90th=[ 469], 99.95th=[ 668], 00:11:22.605 | 99.99th=[ 2114] 00:11:22.605 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:22.605 slat (nsec): min=14538, max=80928, avg=20340.84, stdev=3496.05 00:11:22.605 clat (usec): min=96, max=1564, avg=128.30, stdev=28.87 00:11:22.605 lat (usec): min=114, max=1583, avg=148.64, stdev=29.11 00:11:22.605 clat percentiles (usec): 00:11:22.605 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 120], 00:11:22.605 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:11:22.605 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:11:22.605 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 227], 99.95th=[ 474], 00:11:22.605 | 99.99th=[ 1565] 00:11:22.605 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:22.605 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:22.605 lat (usec) : 100=0.05%, 250=99.76%, 500=0.14%, 750=0.02% 00:11:22.605 lat (msec) : 2=0.02%, 4=0.02% 00:11:22.605 cpu : usr=2.00%, sys=8.60%, ctx=5926, majf=0, minf=17 
00:11:22.605 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.605 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.605 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.605 issued rwts: total=2853,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.605 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.605 job2: (groupid=0, jobs=1): err= 0: pid=81059: Mon Dec 16 01:32:52 2024 00:11:22.605 read: IOPS=2604, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:11:22.605 slat (nsec): min=12692, max=40878, avg=14165.71, stdev=1587.08 00:11:22.605 clat (usec): min=153, max=603, avg=178.77, stdev=15.57 00:11:22.605 lat (usec): min=167, max=624, avg=192.93, stdev=15.73 00:11:22.605 clat percentiles (usec): 00:11:22.605 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:11:22.605 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:11:22.605 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202], 00:11:22.605 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 233], 99.95th=[ 478], 00:11:22.605 | 99.99th=[ 603] 00:11:22.605 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:22.605 slat (nsec): min=14883, max=79920, avg=19914.61, stdev=2577.52 00:11:22.605 clat (usec): min=107, max=450, avg=138.80, stdev=13.50 00:11:22.605 lat (usec): min=126, max=469, avg=158.71, stdev=13.74 00:11:22.605 clat percentiles (usec): 00:11:22.605 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 129], 00:11:22.605 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:11:22.605 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:11:22.605 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 198], 99.95th=[ 206], 00:11:22.605 | 99.99th=[ 449] 00:11:22.605 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:11:22.606 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:22.606 lat (usec) : 250=99.95%, 500=0.04%, 750=0.02% 00:11:22.606 cpu : usr=2.50%, sys=7.40%, ctx=5679, majf=0, minf=5 00:11:22.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.606 issued rwts: total=2607,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.606 job3: (groupid=0, jobs=1): err= 0: pid=81060: Mon Dec 16 01:32:52 2024 00:11:22.606 read: IOPS=1983, BW=7932KiB/s (8122kB/s)(7940KiB/1001msec) 00:11:22.606 slat (usec): min=9, max=105, avg=13.08, stdev= 4.67 00:11:22.606 clat (usec): min=176, max=459, avg=260.03, stdev=18.01 00:11:22.606 lat (usec): min=195, max=469, avg=273.11, stdev=18.33 00:11:22.606 clat percentiles (usec): 00:11:22.606 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 247], 00:11:22.606 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 258], 60.00th=[ 262], 00:11:22.606 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:11:22.606 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 412], 99.95th=[ 461], 00:11:22.606 | 99.99th=[ 461] 00:11:22.606 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:22.606 slat (nsec): min=11138, max=68440, avg=19309.07, stdev=4439.30 00:11:22.606 clat (usec): min=158, max=294, avg=201.00, stdev=16.13 00:11:22.606 lat (usec): min=178, max=315, 
avg=220.31, stdev=16.44 00:11:22.606 clat percentiles (usec): 00:11:22.606 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:22.606 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:11:22.606 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 221], 95.00th=[ 231], 00:11:22.606 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 262], 99.95th=[ 269], 00:11:22.606 | 99.99th=[ 293] 00:11:22.606 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:11:22.606 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:22.606 lat (usec) : 250=63.80%, 500=36.20% 00:11:22.606 cpu : usr=1.90%, sys=5.40%, ctx=4040, majf=0, minf=11 00:11:22.606 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.606 issued rwts: total=1985,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.606 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.606 00:11:22.606 Run status group 0 (all jobs): 00:11:22.606 READ: bw=36.8MiB/s (38.6MB/s), 7932KiB/s-11.1MiB/s (8122kB/s-11.7MB/s), io=36.8MiB (38.6MB), run=1001-1001msec 00:11:22.606 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:11:22.606 00:11:22.606 Disk stats (read/write): 00:11:22.606 nvme0n1: ios=1586/2028, merge=0/0, ticks=399/381, in_queue=780, util=88.48% 00:11:22.606 nvme0n2: ios=2609/2577, merge=0/0, ticks=462/348, in_queue=810, util=89.60% 00:11:22.606 nvme0n3: ios=2391/2560, merge=0/0, ticks=463/378, in_queue=841, util=90.04% 00:11:22.606 nvme0n4: ios=1542/2026, merge=0/0, ticks=388/412, in_queue=800, util=89.79% 00:11:22.606 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:22.606 [global] 00:11:22.606 thread=1 00:11:22.606 invalidate=1 00:11:22.606 rw=write 00:11:22.606 time_based=1 00:11:22.606 runtime=1 00:11:22.606 ioengine=libaio 00:11:22.606 direct=1 00:11:22.606 bs=4096 00:11:22.606 iodepth=128 00:11:22.606 norandommap=0 00:11:22.606 numjobs=1 00:11:22.606 00:11:22.606 verify_dump=1 00:11:22.606 verify_backlog=512 00:11:22.606 verify_state_save=0 00:11:22.606 do_verify=1 00:11:22.606 verify=crc32c-intel 00:11:22.606 [job0] 00:11:22.606 filename=/dev/nvme0n1 00:11:22.606 [job1] 00:11:22.606 filename=/dev/nvme0n2 00:11:22.606 [job2] 00:11:22.606 filename=/dev/nvme0n3 00:11:22.606 [job3] 00:11:22.606 filename=/dev/nvme0n4 00:11:22.606 Could not set queue depth (nvme0n1) 00:11:22.606 Could not set queue depth (nvme0n2) 00:11:22.606 Could not set queue depth (nvme0n3) 00:11:22.606 Could not set queue depth (nvme0n4) 00:11:22.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.606 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.606 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.606 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.606 fio-3.35 00:11:22.606 Starting 4 threads 00:11:23.984 00:11:23.984 job0: (groupid=0, jobs=1): err= 0: pid=81114: Mon Dec 16 01:32:54 2024 00:11:23.984 read: IOPS=3059, BW=12.0MiB/s 
(12.5MB/s)(12.0MiB/1004msec) 00:11:23.984 slat (usec): min=6, max=7072, avg=155.39, stdev=637.94 00:11:23.984 clat (usec): min=14125, max=32364, avg=20092.26, stdev=3534.74 00:11:23.984 lat (usec): min=14140, max=33407, avg=20247.64, stdev=3590.73 00:11:23.984 clat percentiles (usec): 00:11:23.984 | 1.00th=[14353], 5.00th=[15533], 10.00th=[15795], 20.00th=[16057], 00:11:23.984 | 30.00th=[16909], 40.00th=[18482], 50.00th=[20579], 60.00th=[21890], 00:11:23.984 | 70.00th=[22414], 80.00th=[22938], 90.00th=[23987], 95.00th=[25822], 00:11:23.984 | 99.00th=[29230], 99.50th=[30540], 99.90th=[32375], 99.95th=[32375], 00:11:23.984 | 99.99th=[32375] 00:11:23.984 write: IOPS=3126, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1004msec); 0 zone resets 00:11:23.984 slat (usec): min=12, max=6545, avg=156.62, stdev=572.06 00:11:23.984 clat (usec): min=3218, max=40240, avg=20714.19, stdev=6643.48 00:11:23.984 lat (usec): min=3264, max=40264, avg=20870.82, stdev=6689.88 00:11:23.984 clat percentiles (usec): 00:11:23.984 | 1.00th=[10552], 5.00th=[12125], 10.00th=[12518], 20.00th=[13042], 00:11:23.984 | 30.00th=[13960], 40.00th=[20841], 50.00th=[22414], 60.00th=[23462], 00:11:23.984 | 70.00th=[23987], 80.00th=[24511], 90.00th=[27657], 95.00th=[33817], 00:11:23.984 | 99.00th=[38536], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:11:23.984 | 99.99th=[40109] 00:11:23.984 bw ( KiB/s): min=12184, max=12416, per=19.91%, avg=12300.00, stdev=164.05, samples=2 00:11:23.984 iops : min= 3046, max= 3104, avg=3075.00, stdev=41.01, samples=2 00:11:23.984 lat (msec) : 4=0.03%, 10=0.39%, 20=40.57%, 50=59.01% 00:11:23.984 cpu : usr=2.79%, sys=10.77%, ctx=377, majf=0, minf=3 00:11:23.984 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:23.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.984 issued rwts: total=3072,3139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.984 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.984 job1: (groupid=0, jobs=1): err= 0: pid=81115: Mon Dec 16 01:32:54 2024 00:11:23.984 read: IOPS=2315, BW=9260KiB/s (9482kB/s)(9288KiB/1003msec) 00:11:23.984 slat (usec): min=6, max=7385, avg=179.01, stdev=716.64 00:11:23.984 clat (usec): min=1849, max=49947, avg=20937.90, stdev=5626.64 00:11:23.984 lat (usec): min=4142, max=49963, avg=21116.91, stdev=5686.75 00:11:23.984 clat percentiles (usec): 00:11:23.984 | 1.00th=[ 6521], 5.00th=[13698], 10.00th=[15533], 20.00th=[15926], 00:11:23.984 | 30.00th=[17695], 40.00th=[20055], 50.00th=[21890], 60.00th=[22676], 00:11:23.984 | 70.00th=[22938], 80.00th=[23725], 90.00th=[26870], 95.00th=[29754], 00:11:23.984 | 99.00th=[36963], 99.50th=[44303], 99.90th=[50070], 99.95th=[50070], 00:11:23.984 | 99.99th=[50070] 00:11:23.984 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:11:23.984 slat (usec): min=14, max=9061, avg=218.65, stdev=717.87 00:11:23.984 clat (usec): min=15450, max=62847, avg=30212.02, stdev=11072.09 00:11:23.984 lat (usec): min=15521, max=62871, avg=30430.67, stdev=11146.68 00:11:23.984 clat percentiles (usec): 00:11:23.984 | 1.00th=[17957], 5.00th=[20317], 10.00th=[21365], 20.00th=[22152], 00:11:23.984 | 30.00th=[23462], 40.00th=[23725], 50.00th=[24249], 60.00th=[25297], 00:11:23.984 | 70.00th=[33817], 80.00th=[40633], 90.00th=[48497], 95.00th=[54264], 00:11:23.984 | 99.00th=[62129], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:11:23.984 | 99.99th=[62653] 
00:11:23.984 bw ( KiB/s): min=10084, max=10416, per=16.60%, avg=10250.00, stdev=234.76, samples=2 00:11:23.984 iops : min= 2521, max= 2604, avg=2562.50, stdev=58.69, samples=2 00:11:23.985 lat (msec) : 2=0.02%, 10=1.31%, 20=19.64%, 50=74.52%, 100=4.51% 00:11:23.985 cpu : usr=2.99%, sys=8.38%, ctx=388, majf=0, minf=13 00:11:23.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:23.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.985 issued rwts: total=2322,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.985 job2: (groupid=0, jobs=1): err= 0: pid=81116: Mon Dec 16 01:32:54 2024 00:11:23.985 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:23.985 slat (usec): min=5, max=3877, avg=102.24, stdev=399.95 00:11:23.985 clat (usec): min=10297, max=17710, avg=13568.81, stdev=1035.27 00:11:23.985 lat (usec): min=10313, max=17907, avg=13671.05, stdev=1084.74 00:11:23.985 clat percentiles (usec): 00:11:23.985 | 1.00th=[10683], 5.00th=[11731], 10.00th=[12387], 20.00th=[13042], 00:11:23.985 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:11:23.985 | 70.00th=[13829], 80.00th=[14091], 90.00th=[15008], 95.00th=[15533], 00:11:23.985 | 99.00th=[16319], 99.50th=[16909], 99.90th=[16909], 99.95th=[17695], 00:11:23.985 | 99.99th=[17695] 00:11:23.985 write: IOPS=4897, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1002msec); 0 zone resets 00:11:23.985 slat (usec): min=11, max=3668, avg=99.24, stdev=437.93 00:11:23.985 clat (usec): min=216, max=17624, avg=13047.12, stdev=1451.47 00:11:23.985 lat (usec): min=3099, max=17642, avg=13146.36, stdev=1501.11 00:11:23.985 clat percentiles (usec): 00:11:23.985 | 1.00th=[ 7635], 5.00th=[11469], 10.00th=[11994], 20.00th=[12518], 00:11:23.985 | 30.00th=[12780], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:11:23.985 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14091], 95.00th=[15533], 00:11:23.985 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17433], 00:11:23.985 | 99.99th=[17695] 00:11:23.985 bw ( KiB/s): min=17752, max=20521, per=30.98%, avg=19136.50, stdev=1957.98, samples=2 00:11:23.985 iops : min= 4438, max= 5130, avg=4784.00, stdev=489.32, samples=2 00:11:23.985 lat (usec) : 250=0.01% 00:11:23.985 lat (msec) : 4=0.37%, 10=0.68%, 20=98.94% 00:11:23.985 cpu : usr=5.29%, sys=13.59%, ctx=440, majf=0, minf=9 00:11:23.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:23.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.985 issued rwts: total=4608,4907,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.985 job3: (groupid=0, jobs=1): err= 0: pid=81117: Mon Dec 16 01:32:54 2024 00:11:23.985 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:11:23.985 slat (usec): min=7, max=5405, avg=103.02, stdev=493.06 00:11:23.985 clat (usec): min=10139, max=16371, avg=13689.37, stdev=690.73 00:11:23.985 lat (usec): min=12487, max=16399, avg=13792.39, stdev=498.58 00:11:23.985 clat percentiles (usec): 00:11:23.985 | 1.00th=[10814], 5.00th=[13042], 10.00th=[13304], 20.00th=[13435], 00:11:23.985 | 30.00th=[13566], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:11:23.985 | 70.00th=[13829], 
80.00th=[13960], 90.00th=[14222], 95.00th=[14353], 00:11:23.985 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16319], 99.95th=[16319], 00:11:23.985 | 99.99th=[16319] 00:11:23.985 write: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1002msec); 0 zone resets 00:11:23.985 slat (usec): min=11, max=3239, avg=99.04, stdev=428.09 00:11:23.985 clat (usec): min=373, max=16421, avg=12959.01, stdev=1247.89 00:11:23.985 lat (usec): min=2863, max=16450, avg=13058.05, stdev=1170.31 00:11:23.985 clat percentiles (usec): 00:11:23.985 | 1.00th=[ 6456], 5.00th=[12125], 10.00th=[12518], 20.00th=[12780], 00:11:23.985 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:11:23.985 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13566], 95.00th=[13960], 00:11:23.985 | 99.00th=[16057], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:11:23.985 | 99.99th=[16450] 00:11:23.985 bw ( KiB/s): min=20480, max=20480, per=33.16%, avg=20480.00, stdev= 0.00, samples=1 00:11:23.985 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:23.985 lat (usec) : 500=0.01% 00:11:23.985 lat (msec) : 4=0.34%, 10=0.73%, 20=98.93% 00:11:23.985 cpu : usr=5.69%, sys=12.19%, ctx=298, majf=0, minf=10 00:11:23.985 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:23.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.985 issued rwts: total=4608,4897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.985 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.985 00:11:23.985 Run status group 0 (all jobs): 00:11:23.985 READ: bw=56.8MiB/s (59.6MB/s), 9260KiB/s-18.0MiB/s (9482kB/s-18.8MB/s), io=57.1MiB (59.8MB), run=1002-1004msec 00:11:23.985 WRITE: bw=60.3MiB/s (63.2MB/s), 9.97MiB/s-19.1MiB/s (10.5MB/s-20.1MB/s), io=60.6MiB (63.5MB), run=1002-1004msec 00:11:23.985 00:11:23.985 Disk stats (read/write): 00:11:23.985 nvme0n1: ios=2610/2783, merge=0/0, ticks=16604/17426, in_queue=34030, util=88.28% 00:11:23.985 nvme0n2: ios=2097/2151, merge=0/0, ticks=14368/20346, in_queue=34714, util=88.78% 00:11:23.985 nvme0n3: ios=4068/4096, merge=0/0, ticks=17529/15084, in_queue=32613, util=89.15% 00:11:23.985 nvme0n4: ios=4064/4096, merge=0/0, ticks=12513/11680, in_queue=24193, util=89.72% 00:11:23.985 01:32:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:23.985 [global] 00:11:23.985 thread=1 00:11:23.985 invalidate=1 00:11:23.985 rw=randwrite 00:11:23.985 time_based=1 00:11:23.985 runtime=1 00:11:23.985 ioengine=libaio 00:11:23.985 direct=1 00:11:23.985 bs=4096 00:11:23.985 iodepth=128 00:11:23.985 norandommap=0 00:11:23.985 numjobs=1 00:11:23.985 00:11:23.985 verify_dump=1 00:11:23.985 verify_backlog=512 00:11:23.985 verify_state_save=0 00:11:23.985 do_verify=1 00:11:23.985 verify=crc32c-intel 00:11:23.985 [job0] 00:11:23.985 filename=/dev/nvme0n1 00:11:23.985 [job1] 00:11:23.985 filename=/dev/nvme0n2 00:11:23.985 [job2] 00:11:23.985 filename=/dev/nvme0n3 00:11:23.985 [job3] 00:11:23.985 filename=/dev/nvme0n4 00:11:23.985 Could not set queue depth (nvme0n1) 00:11:23.985 Could not set queue depth (nvme0n2) 00:11:23.985 Could not set queue depth (nvme0n3) 00:11:23.985 Could not set queue depth (nvme0n4) 00:11:23.985 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.985 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.985 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.985 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:23.985 fio-3.35 00:11:23.985 Starting 4 threads 00:11:25.364 00:11:25.364 job0: (groupid=0, jobs=1): err= 0: pid=81172: Mon Dec 16 01:32:55 2024 00:11:25.364 read: IOPS=1731, BW=6924KiB/s (7090kB/s)(6952KiB/1004msec) 00:11:25.364 slat (usec): min=4, max=11601, avg=298.41, stdev=1087.99 00:11:25.364 clat (usec): min=1196, max=67567, avg=36730.33, stdev=9525.70 00:11:25.364 lat (usec): min=5615, max=67601, avg=37028.74, stdev=9531.34 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[ 7963], 5.00th=[23725], 10.00th=[26870], 20.00th=[31065], 00:11:25.364 | 30.00th=[33162], 40.00th=[33817], 50.00th=[35914], 60.00th=[38011], 00:11:25.364 | 70.00th=[39584], 80.00th=[41157], 90.00th=[47973], 95.00th=[57410], 00:11:25.364 | 99.00th=[66847], 99.50th=[66847], 99.90th=[66847], 99.95th=[67634], 00:11:25.364 | 99.99th=[67634] 00:11:25.364 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:11:25.364 slat (usec): min=5, max=18531, avg=228.61, stdev=1022.61 00:11:25.364 clat (usec): min=6950, max=48397, avg=29795.75, stdev=6525.19 00:11:25.364 lat (usec): min=6973, max=48423, avg=30024.36, stdev=6521.50 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[10421], 5.00th=[20317], 10.00th=[21627], 20.00th=[24249], 00:11:25.364 | 30.00th=[26346], 40.00th=[28705], 50.00th=[30802], 60.00th=[31851], 00:11:25.364 | 70.00th=[34341], 80.00th=[35390], 90.00th=[36963], 95.00th=[38011], 00:11:25.364 | 99.00th=[47973], 99.50th=[47973], 99.90th=[48497], 99.95th=[48497], 00:11:25.364 | 99.99th=[48497] 00:11:25.364 bw ( KiB/s): min= 8192, max= 8192, per=15.71%, avg=8192.00, stdev= 0.00, samples=2 00:11:25.364 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:11:25.364 lat (msec) : 2=0.03%, 10=0.77%, 20=3.43%, 50=92.13%, 100=3.65% 00:11:25.364 cpu : usr=1.40%, sys=6.08%, ctx=651, majf=0, minf=16 00:11:25.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:11:25.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.364 issued rwts: total=1738,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.364 job1: (groupid=0, jobs=1): err= 0: pid=81173: Mon Dec 16 01:32:55 2024 00:11:25.364 read: IOPS=5231, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:11:25.364 slat (usec): min=6, max=23813, avg=92.20, stdev=725.68 00:11:25.364 clat (usec): min=1246, max=61655, avg=12777.05, stdev=6598.00 00:11:25.364 lat (usec): min=5145, max=61678, avg=12869.24, stdev=6661.45 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[ 5997], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9765], 00:11:25.364 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:11:25.364 | 70.00th=[10552], 80.00th=[15270], 90.00th=[20841], 95.00th=[25297], 00:11:25.364 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:11:25.364 | 99.99th=[61604] 00:11:25.364 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:25.364 slat (usec): min=10, max=11957, avg=84.62, stdev=532.09 00:11:25.364 
clat (usec): min=4868, max=33910, avg=10661.27, stdev=3613.20 00:11:25.364 lat (usec): min=6712, max=33956, avg=10745.89, stdev=3604.94 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[ 6652], 5.00th=[ 8160], 10.00th=[ 8356], 20.00th=[ 8586], 00:11:25.364 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:11:25.364 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[17695], 95.00th=[19268], 00:11:25.364 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23200], 99.95th=[23200], 00:11:25.364 | 99.99th=[33817] 00:11:25.364 bw ( KiB/s): min=16384, max=28614, per=43.14%, avg=22499.00, stdev=8647.92, samples=2 00:11:25.364 iops : min= 4096, max= 7153, avg=5624.50, stdev=2161.63, samples=2 00:11:25.364 lat (msec) : 2=0.01%, 10=57.14%, 20=32.47%, 50=10.38%, 100=0.01% 00:11:25.364 cpu : usr=3.69%, sys=14.47%, ctx=220, majf=0, minf=5 00:11:25.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:25.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.364 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.364 job2: (groupid=0, jobs=1): err= 0: pid=81174: Mon Dec 16 01:32:55 2024 00:11:25.364 read: IOPS=1570, BW=6280KiB/s (6431kB/s)(6324KiB/1007msec) 00:11:25.364 slat (usec): min=5, max=24228, avg=304.03, stdev=1250.52 00:11:25.364 clat (usec): min=2017, max=71271, avg=38278.01, stdev=8975.69 00:11:25.364 lat (usec): min=6877, max=74100, avg=38582.03, stdev=9024.55 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[ 8029], 5.00th=[28967], 10.00th=[30802], 20.00th=[33162], 00:11:25.364 | 30.00th=[34341], 40.00th=[35390], 50.00th=[36963], 60.00th=[39060], 00:11:25.364 | 70.00th=[41157], 80.00th=[41681], 90.00th=[45876], 95.00th=[58459], 00:11:25.364 | 99.00th=[63177], 99.50th=[63177], 99.90th=[68682], 99.95th=[70779], 00:11:25.364 | 99.99th=[70779] 00:11:25.364 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:11:25.364 slat (usec): min=5, max=19259, avg=248.66, stdev=1100.05 00:11:25.364 clat (usec): min=17220, max=57394, avg=32378.13, stdev=5303.66 00:11:25.364 lat (usec): min=17237, max=58622, avg=32626.79, stdev=5319.16 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[18744], 5.00th=[24249], 10.00th=[25297], 20.00th=[28181], 00:11:25.364 | 30.00th=[29754], 40.00th=[31327], 50.00th=[32637], 60.00th=[33817], 00:11:25.364 | 70.00th=[34866], 80.00th=[36439], 90.00th=[38011], 95.00th=[41157], 00:11:25.364 | 99.00th=[50070], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:11:25.364 | 99.99th=[57410] 00:11:25.364 bw ( KiB/s): min= 7512, max= 8192, per=15.05%, avg=7852.00, stdev=480.83, samples=2 00:11:25.364 iops : min= 1878, max= 2048, avg=1963.00, stdev=120.21, samples=2 00:11:25.364 lat (msec) : 4=0.03%, 10=0.66%, 20=1.16%, 50=93.66%, 100=4.49% 00:11:25.364 cpu : usr=1.99%, sys=4.77%, ctx=671, majf=0, minf=11 00:11:25.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:11:25.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.364 issued rwts: total=1581,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.364 job3: (groupid=0, jobs=1): err= 0: pid=81175: Mon Dec 16 01:32:55 
2024 00:11:25.364 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:11:25.364 slat (usec): min=4, max=7287, avg=154.94, stdev=607.56 00:11:25.364 clat (usec): min=8607, max=47377, avg=20400.94, stdev=11345.16 00:11:25.364 lat (usec): min=8626, max=47388, avg=20555.88, stdev=11423.76 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[11076], 20.00th=[11338], 00:11:25.364 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[15139], 00:11:25.364 | 70.00th=[32113], 80.00th=[33817], 90.00th=[36439], 95.00th=[38536], 00:11:25.364 | 99.00th=[42730], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:11:25.364 | 99.99th=[47449] 00:11:25.364 write: IOPS=3379, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1007msec); 0 zone resets 00:11:25.364 slat (usec): min=9, max=9661, avg=146.77, stdev=591.56 00:11:25.364 clat (usec): min=5277, max=38667, avg=18977.23, stdev=9813.85 00:11:25.364 lat (usec): min=6529, max=38683, avg=19124.00, stdev=9884.36 00:11:25.364 clat percentiles (usec): 00:11:25.364 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10421], 20.00th=[10814], 00:11:25.364 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12125], 60.00th=[14484], 00:11:25.364 | 70.00th=[27919], 80.00th=[31327], 90.00th=[33817], 95.00th=[34866], 00:11:25.364 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38011], 99.95th=[38536], 00:11:25.364 | 99.99th=[38536] 00:11:25.364 bw ( KiB/s): min= 6786, max=19408, per=25.11%, avg=13097.00, stdev=8925.10, samples=2 00:11:25.365 iops : min= 1696, max= 4852, avg=3274.00, stdev=2231.63, samples=2 00:11:25.365 lat (msec) : 10=3.86%, 20=57.48%, 50=38.66% 00:11:25.365 cpu : usr=2.68%, sys=9.34%, ctx=812, majf=0, minf=15 00:11:25.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:25.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.365 issued rwts: total=3072,3403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.365 00:11:25.365 Run status group 0 (all jobs): 00:11:25.365 READ: bw=45.1MiB/s (47.3MB/s), 6280KiB/s-20.4MiB/s (6431kB/s-21.4MB/s), io=45.5MiB (47.7MB), run=1003-1007msec 00:11:25.365 WRITE: bw=50.9MiB/s (53.4MB/s), 8135KiB/s-21.9MiB/s (8330kB/s-23.0MB/s), io=51.3MiB (53.8MB), run=1003-1007msec 00:11:25.365 00:11:25.365 Disk stats (read/write): 00:11:25.365 nvme0n1: ios=1586/1637, merge=0/0, ticks=20490/16799, in_queue=37289, util=85.96% 00:11:25.365 nvme0n2: ios=4397/4608, merge=0/0, ticks=55630/46550, in_queue=102180, util=89.37% 00:11:25.365 nvme0n3: ios=1492/1536, merge=0/0, ticks=20171/17546, in_queue=37717, util=88.87% 00:11:25.365 nvme0n4: ios=2865/3072, merge=0/0, ticks=14782/14253, in_queue=29035, util=89.73% 00:11:25.365 01:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:25.365 01:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=81188 00:11:25.365 01:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:25.365 01:32:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:25.365 [global] 00:11:25.365 thread=1 00:11:25.365 invalidate=1 00:11:25.365 rw=read 00:11:25.365 time_based=1 00:11:25.365 runtime=10 00:11:25.365 ioengine=libaio 00:11:25.365 direct=1 00:11:25.365 bs=4096 00:11:25.365 iodepth=1 
00:11:25.365 norandommap=1 00:11:25.365 numjobs=1 00:11:25.365 00:11:25.365 [job0] 00:11:25.365 filename=/dev/nvme0n1 00:11:25.365 [job1] 00:11:25.365 filename=/dev/nvme0n2 00:11:25.365 [job2] 00:11:25.365 filename=/dev/nvme0n3 00:11:25.365 [job3] 00:11:25.365 filename=/dev/nvme0n4 00:11:25.365 Could not set queue depth (nvme0n1) 00:11:25.365 Could not set queue depth (nvme0n2) 00:11:25.365 Could not set queue depth (nvme0n3) 00:11:25.365 Could not set queue depth (nvme0n4) 00:11:25.365 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.365 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.365 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.365 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.365 fio-3.35 00:11:25.365 Starting 4 threads 00:11:28.649 01:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:28.649 fio: pid=81237, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.650 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42496000, buflen=4096 00:11:28.650 01:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:28.650 fio: pid=81236, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.650 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46551040, buflen=4096 00:11:28.650 01:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.650 01:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:28.908 fio: pid=81234, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.908 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=7303168, buflen=4096 00:11:29.166 01:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.166 01:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:29.425 fio: pid=81235, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.425 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7098368, buflen=4096 00:11:29.425 01:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.425 01:32:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:29.425 00:11:29.425 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=81234: Mon Dec 16 01:32:59 2024 00:11:29.425 read: IOPS=5145, BW=20.1MiB/s (21.1MB/s)(71.0MiB/3531msec) 00:11:29.425 slat (usec): min=8, max=11850, avg=15.61, stdev=148.41 00:11:29.425 clat (usec): min=126, max=1744, avg=177.30, stdev=40.84 00:11:29.425 lat (usec): min=138, max=12107, avg=192.92, 
stdev=154.40 00:11:29.425 clat percentiles (usec): 00:11:29.425 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:11:29.425 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:11:29.425 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 237], 95.00th=[ 251], 00:11:29.425 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 388], 99.95th=[ 562], 00:11:29.425 | 99.99th=[ 1631] 00:11:29.425 bw ( KiB/s): min=15680, max=22408, per=33.62%, avg=20321.33, stdev=2842.85, samples=6 00:11:29.425 iops : min= 3920, max= 5602, avg=5080.33, stdev=710.71, samples=6 00:11:29.425 lat (usec) : 250=94.61%, 500=5.32%, 750=0.03%, 1000=0.02% 00:11:29.425 lat (msec) : 2=0.02% 00:11:29.425 cpu : usr=1.76%, sys=6.09%, ctx=18176, majf=0, minf=1 00:11:29.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.425 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.425 issued rwts: total=18168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.425 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=81235: Mon Dec 16 01:32:59 2024 00:11:29.425 read: IOPS=4718, BW=18.4MiB/s (19.3MB/s)(70.8MiB/3840msec) 00:11:29.425 slat (usec): min=7, max=8788, avg=15.57, stdev=138.71 00:11:29.425 clat (usec): min=67, max=2241, avg=194.99, stdev=66.30 00:11:29.425 lat (usec): min=131, max=9028, avg=210.55, stdev=154.07 00:11:29.425 clat percentiles (usec): 00:11:29.425 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 155], 00:11:29.425 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 184], 00:11:29.425 | 70.00th=[ 204], 80.00th=[ 237], 90.00th=[ 262], 95.00th=[ 306], 00:11:29.425 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 758], 99.95th=[ 1401], 00:11:29.425 | 99.99th=[ 2114] 00:11:29.425 bw ( KiB/s): min=14336, max=21560, per=30.75%, avg=18586.43, stdev=2964.48, samples=7 00:11:29.425 iops : min= 3584, max= 5390, avg=4646.57, stdev=741.09, samples=7 00:11:29.425 lat (usec) : 100=0.01%, 250=86.26%, 500=13.52%, 750=0.10%, 1000=0.04% 00:11:29.425 lat (msec) : 2=0.06%, 4=0.01% 00:11:29.425 cpu : usr=1.51%, sys=5.31%, ctx=18126, majf=0, minf=2 00:11:29.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.425 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.425 issued rwts: total=18118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.425 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=81236: Mon Dec 16 01:32:59 2024 00:11:29.425 read: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(44.4MiB/3226msec) 00:11:29.425 slat (usec): min=7, max=13684, avg=15.64, stdev=145.16 00:11:29.425 clat (usec): min=145, max=7235, avg=266.76, stdev=108.49 00:11:29.425 lat (usec): min=158, max=14002, avg=282.41, stdev=181.76 00:11:29.425 clat percentiles (usec): 00:11:29.425 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:11:29.425 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 269], 00:11:29.425 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 326], 00:11:29.425 | 99.00th=[ 371], 99.50th=[ 404], 99.90th=[ 914], 99.95th=[ 3392], 00:11:29.425 | 99.99th=[ 3982] 
00:11:29.425 bw ( KiB/s): min=13048, max=15104, per=23.20%, avg=14024.00, stdev=767.80, samples=6 00:11:29.425 iops : min= 3262, max= 3776, avg=3506.00, stdev=191.95, samples=6 00:11:29.425 lat (usec) : 250=34.93%, 500=64.79%, 750=0.14%, 1000=0.04% 00:11:29.425 lat (msec) : 2=0.02%, 4=0.06%, 10=0.01% 00:11:29.425 cpu : usr=0.99%, sys=4.84%, ctx=11368, majf=0, minf=1 00:11:29.425 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.425 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.425 issued rwts: total=11366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.425 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.425 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=81237: Mon Dec 16 01:32:59 2024 00:11:29.425 read: IOPS=3547, BW=13.9MiB/s (14.5MB/s)(40.5MiB/2925msec) 00:11:29.426 slat (nsec): min=7464, max=83653, avg=13276.62, stdev=5595.22 00:11:29.426 clat (usec): min=156, max=2287, avg=267.30, stdev=42.09 00:11:29.426 lat (usec): min=181, max=2314, avg=280.58, stdev=42.24 00:11:29.426 clat percentiles (usec): 00:11:29.426 | 1.00th=[ 210], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 243], 00:11:29.426 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:11:29.426 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 330], 00:11:29.426 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 523], 99.95th=[ 627], 00:11:29.426 | 99.99th=[ 1729] 00:11:29.426 bw ( KiB/s): min=13216, max=14384, per=23.15%, avg=13990.40, stdev=467.00, samples=5 00:11:29.426 iops : min= 3304, max= 3596, avg=3497.60, stdev=116.75, samples=5 00:11:29.426 lat (usec) : 250=29.17%, 500=70.67%, 750=0.11%, 1000=0.01% 00:11:29.426 lat (msec) : 2=0.02%, 4=0.01% 00:11:29.426 cpu : usr=1.09%, sys=4.51%, ctx=10377, majf=0, minf=2 00:11:29.426 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.426 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.426 issued rwts: total=10376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.426 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.426 00:11:29.426 Run status group 0 (all jobs): 00:11:29.426 READ: bw=59.0MiB/s (61.9MB/s), 13.8MiB/s-20.1MiB/s (14.4MB/s-21.1MB/s), io=227MiB (238MB), run=2925-3840msec 00:11:29.426 00:11:29.426 Disk stats (read/write): 00:11:29.426 nvme0n1: ios=17206/0, merge=0/0, ticks=3097/0, in_queue=3097, util=95.33% 00:11:29.426 nvme0n2: ios=16787/0, merge=0/0, ticks=3252/0, in_queue=3252, util=95.66% 00:11:29.426 nvme0n3: ios=10942/0, merge=0/0, ticks=2802/0, in_queue=2802, util=96.09% 00:11:29.426 nvme0n4: ios=10154/0, merge=0/0, ticks=2548/0, in_queue=2548, util=96.83% 00:11:29.684 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.684 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:29.941 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.941 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc4 00:11:30.238 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.238 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:30.497 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.497 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 81188 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:30.755 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.756 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:30.756 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.756 nvmf hotplug test: fio failed as expected 00:11:30.756 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:30.756 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:30.756 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:30.756 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:31.015 
01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.015 rmmod nvme_tcp 00:11:31.015 rmmod nvme_fabrics 00:11:31.015 rmmod nvme_keyring 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 80814 ']' 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 80814 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 80814 ']' 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 80814 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80814 00:11:31.015 killing process with pid 80814 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80814' 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 80814 00:11:31.015 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 80814 00:11:31.273 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.273 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:31.274 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:31.533 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:31.533 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:31.533 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:31.533 00:11:31.533 real 0m19.455s 00:11:31.533 user 1m12.811s 00:11:31.533 sys 0m10.341s 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.533 ************************************ 00:11:31.533 END TEST nvmf_fio_target 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.533 ************************************ 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:31.533 ************************************ 00:11:31.533 START TEST nvmf_bdevio 00:11:31.533 ************************************ 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:31.533 * Looking for test storage... 
00:11:31.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:31.533 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:31.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.793 --rc genhtml_branch_coverage=1 00:11:31.793 --rc genhtml_function_coverage=1 00:11:31.793 --rc genhtml_legend=1 00:11:31.793 --rc geninfo_all_blocks=1 00:11:31.793 --rc geninfo_unexecuted_blocks=1 00:11:31.793 00:11:31.793 ' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:31.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.793 --rc genhtml_branch_coverage=1 00:11:31.793 --rc genhtml_function_coverage=1 00:11:31.793 --rc genhtml_legend=1 00:11:31.793 --rc geninfo_all_blocks=1 00:11:31.793 --rc geninfo_unexecuted_blocks=1 00:11:31.793 00:11:31.793 ' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:31.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.793 --rc genhtml_branch_coverage=1 00:11:31.793 --rc genhtml_function_coverage=1 00:11:31.793 --rc genhtml_legend=1 00:11:31.793 --rc geninfo_all_blocks=1 00:11:31.793 --rc geninfo_unexecuted_blocks=1 00:11:31.793 00:11:31.793 ' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:31.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.793 --rc genhtml_branch_coverage=1 00:11:31.793 --rc genhtml_function_coverage=1 00:11:31.793 --rc genhtml_legend=1 00:11:31.793 --rc geninfo_all_blocks=1 00:11:31.793 --rc geninfo_unexecuted_blocks=1 00:11:31.793 00:11:31.793 ' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.793 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.793 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:31.794 Cannot find device "nvmf_init_br" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:31.794 Cannot find device "nvmf_init_br2" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:31.794 Cannot find device "nvmf_tgt_br" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:31.794 Cannot find device "nvmf_tgt_br2" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:31.794 Cannot find device "nvmf_init_br" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:31.794 Cannot find device "nvmf_init_br2" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:31.794 Cannot find device "nvmf_tgt_br" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:31.794 Cannot find device "nvmf_tgt_br2" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:31.794 Cannot find device "nvmf_br" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:31.794 Cannot find device "nvmf_init_if" 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:31.794 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:32.053 Cannot find device "nvmf_init_if2" 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:32.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:32.053 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:32.053 
01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:32.053 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:32.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:32.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:11:32.054 00:11:32.054 --- 10.0.0.3 ping statistics --- 00:11:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.054 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:32.054 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:32.054 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:11:32.054 00:11:32.054 --- 10.0.0.4 ping statistics --- 00:11:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.054 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:32.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:32.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:32.054 00:11:32.054 --- 10.0.0.1 ping statistics --- 00:11:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.054 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:32.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:32.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:11:32.054 00:11:32.054 --- 10.0.0.2 ping statistics --- 00:11:32.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:32.054 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=81559 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 81559 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 81559 ']' 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.054 01:33:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.313 [2024-12-16 01:33:02.764203] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
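nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with core mask 0x78 and then waits for the daemon to answer on its UNIX-domain RPC socket before the test proceeds. A rough standalone equivalent is sketched below; the polling loop is an assumption for illustration, not the test suite's own waitforlisten implementation.

    # Start the target in the namespace and block until its RPC socket answers.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!

    # The RPC socket is a filesystem path, so it is reachable from the root
    # namespace even though the target runs inside nvmf_tgt_ns_spdk.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.2
    done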
00:11:32.313 [2024-12-16 01:33:02.764312] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.313 [2024-12-16 01:33:02.918066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:32.313 [2024-12-16 01:33:02.938577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.313 [2024-12-16 01:33:02.938790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:32.313 [2024-12-16 01:33:02.938867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.313 [2024-12-16 01:33:02.938960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.313 [2024-12-16 01:33:02.939033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.313 [2024-12-16 01:33:02.939862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:11:32.313 [2024-12-16 01:33:02.940010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:11:32.313 [2024-12-16 01:33:02.940122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:11:32.313 [2024-12-16 01:33:02.940130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:32.313 [2024-12-16 01:33:02.968656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.572 [2024-12-16 01:33:03.066379] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.572 Malloc0 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:32.572 [2024-12-16 01:33:03.128966] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:32.572 { 00:11:32.572 "params": { 00:11:32.572 "name": "Nvme$subsystem", 00:11:32.572 "trtype": "$TEST_TRANSPORT", 00:11:32.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:32.572 "adrfam": "ipv4", 00:11:32.572 "trsvcid": "$NVMF_PORT", 00:11:32.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:32.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:32.572 "hdgst": ${hdgst:-false}, 00:11:32.572 "ddgst": ${ddgst:-false} 00:11:32.572 }, 00:11:32.572 "method": "bdev_nvme_attach_controller" 00:11:32.572 } 00:11:32.572 EOF 00:11:32.572 )") 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
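The rpc_cmd calls traced above provision the target end to end: a TCP transport, a 64 MiB RAM-backed Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 (open to any host), its namespace, and a listener on 10.0.0.3:4420; the JSON printed just below is then handed to bdevio as its --json config so it attaches to that subsystem as an initiator. Written out with scripts/rpc.py against the default socket, the same sequence would look roughly like this (a sketch; rpc_cmd in the test wraps exactly these RPC names):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420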
00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:32.572 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:32.572 "params": { 00:11:32.572 "name": "Nvme1", 00:11:32.572 "trtype": "tcp", 00:11:32.573 "traddr": "10.0.0.3", 00:11:32.573 "adrfam": "ipv4", 00:11:32.573 "trsvcid": "4420", 00:11:32.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:32.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:32.573 "hdgst": false, 00:11:32.573 "ddgst": false 00:11:32.573 }, 00:11:32.573 "method": "bdev_nvme_attach_controller" 00:11:32.573 }' 00:11:32.573 [2024-12-16 01:33:03.194774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:32.573 [2024-12-16 01:33:03.194864] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81588 ] 00:11:32.831 [2024-12-16 01:33:03.349385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.832 [2024-12-16 01:33:03.377440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.832 [2024-12-16 01:33:03.377489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.832 [2024-12-16 01:33:03.377494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.832 [2024-12-16 01:33:03.421093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:33.091 I/O targets: 00:11:33.091 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:33.091 00:11:33.091 00:11:33.091 CUnit - A unit testing framework for C - Version 2.1-3 00:11:33.091 http://cunit.sourceforge.net/ 00:11:33.091 00:11:33.091 00:11:33.091 Suite: bdevio tests on: Nvme1n1 00:11:33.091 Test: blockdev write read block ...passed 00:11:33.091 Test: blockdev write zeroes read block ...passed 00:11:33.091 Test: blockdev write zeroes read no split ...passed 00:11:33.091 Test: blockdev write zeroes read split ...passed 00:11:33.091 Test: blockdev write zeroes read split partial ...passed 00:11:33.091 Test: blockdev reset ...[2024-12-16 01:33:03.552732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:33.091 [2024-12-16 01:33:03.552857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17def20 (9): Bad file descriptor 00:11:33.091 [2024-12-16 01:33:03.571301] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:33.091 passed 00:11:33.091 Test: blockdev write read 8 blocks ...passed 00:11:33.091 Test: blockdev write read size > 128k ...passed 00:11:33.091 Test: blockdev write read invalid size ...passed 00:11:33.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:33.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:33.091 Test: blockdev write read max offset ...passed 00:11:33.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:33.091 Test: blockdev writev readv 8 blocks ...passed 00:11:33.091 Test: blockdev writev readv 30 x 1block ...passed 00:11:33.091 Test: blockdev writev readv block ...passed 00:11:33.091 Test: blockdev writev readv size > 128k ...passed 00:11:33.091 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:33.091 Test: blockdev comparev and writev ...[2024-12-16 01:33:03.580601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.580653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.580677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.580690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.581159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.581192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.581212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.581224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.581617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.581649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.581668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.581680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.581976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.582008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.582027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:33.091 [2024-12-16 01:33:03.582038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:33.091 passed 00:11:33.091 Test: blockdev nvme passthru rw ...passed 00:11:33.091 Test: blockdev nvme passthru vendor specific ...[2024-12-16 01:33:03.583265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.091 [2024-12-16 01:33:03.583363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.583750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.091 [2024-12-16 01:33:03.583782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.583948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.091 [2024-12-16 01:33:03.583978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:33.091 [2024-12-16 01:33:03.584251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:33.092 [2024-12-16 01:33:03.584282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:33.092 passed 00:11:33.092 Test: blockdev nvme admin passthru ...passed 00:11:33.092 Test: blockdev copy ...passed 00:11:33.092 00:11:33.092 Run Summary: Type Total Ran Passed Failed Inactive 00:11:33.092 suites 1 1 n/a 0 0 00:11:33.092 tests 23 23 23 0 0 00:11:33.092 asserts 152 152 152 0 n/a 00:11:33.092 00:11:33.092 Elapsed time = 0.158 seconds 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.092 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.351 rmmod nvme_tcp 00:11:33.351 rmmod nvme_fabrics 00:11:33.351 rmmod nvme_keyring 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
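nvmftestfini, which the trace is in the middle of here, undoes everything the setup did: the initiator-side kernel modules are unloaded (the rmmod lines above), the target process is killed, only the SPDK_NVMF-tagged iptables rules are dropped, and the veth/bridge/namespace topology is dismantled. A condensed sketch of that teardown, assuming the nvmfpid variable from the earlier launch sketch:

    modprobe -r nvme-tcp nvme-fabrics            # initiator-side kernel modules
    kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null

    # Setup tagged its rules with an SPDK_NVMF comment, so restoring the
    # ruleset without those lines removes exactly what the test added.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if                  # deleting one veth end removes its peer
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk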
00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 81559 ']' 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 81559 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 81559 ']' 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 81559 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81559 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:33.351 killing process with pid 81559 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81559' 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 81559 00:11:33.351 01:33:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 81559 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.611 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:33.870 00:11:33.870 real 0m2.189s 00:11:33.870 user 0m5.548s 00:11:33.870 sys 0m0.745s 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:33.870 ************************************ 00:11:33.870 END TEST nvmf_bdevio 00:11:33.870 ************************************ 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:33.870 00:11:33.870 real 2m28.363s 00:11:33.870 user 6m25.394s 00:11:33.870 sys 0m53.388s 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.870 ************************************ 00:11:33.870 END TEST nvmf_target_core 00:11:33.870 ************************************ 00:11:33.870 01:33:04 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:33.870 01:33:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:33.870 01:33:04 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.870 01:33:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:33.870 ************************************ 00:11:33.870 START TEST nvmf_target_extra 00:11:33.870 ************************************ 00:11:33.870 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:33.870 * Looking for test storage... 
00:11:33.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:33.871 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:33.871 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:33.871 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.131 --rc genhtml_branch_coverage=1 00:11:34.131 --rc genhtml_function_coverage=1 00:11:34.131 --rc genhtml_legend=1 00:11:34.131 --rc geninfo_all_blocks=1 00:11:34.131 --rc geninfo_unexecuted_blocks=1 00:11:34.131 00:11:34.131 ' 00:11:34.131 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.132 --rc genhtml_branch_coverage=1 00:11:34.132 --rc genhtml_function_coverage=1 00:11:34.132 --rc genhtml_legend=1 00:11:34.132 --rc geninfo_all_blocks=1 00:11:34.132 --rc geninfo_unexecuted_blocks=1 00:11:34.132 00:11:34.132 ' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.132 --rc genhtml_branch_coverage=1 00:11:34.132 --rc genhtml_function_coverage=1 00:11:34.132 --rc genhtml_legend=1 00:11:34.132 --rc geninfo_all_blocks=1 00:11:34.132 --rc geninfo_unexecuted_blocks=1 00:11:34.132 00:11:34.132 ' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.132 --rc genhtml_branch_coverage=1 00:11:34.132 --rc genhtml_function_coverage=1 00:11:34.132 --rc genhtml_legend=1 00:11:34.132 --rc geninfo_all_blocks=1 00:11:34.132 --rc geninfo_unexecuted_blocks=1 00:11:34.132 00:11:34.132 ' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.132 01:33:04 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.132 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 ************************************ 00:11:34.132 START TEST nvmf_auth_target 00:11:34.132 ************************************ 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:34.132 * Looking for test storage... 
00:11:34.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.132 --rc genhtml_branch_coverage=1 00:11:34.132 --rc genhtml_function_coverage=1 00:11:34.132 --rc genhtml_legend=1 00:11:34.132 --rc geninfo_all_blocks=1 00:11:34.132 --rc geninfo_unexecuted_blocks=1 00:11:34.132 00:11:34.132 ' 00:11:34.132 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.132 --rc genhtml_branch_coverage=1 00:11:34.132 --rc genhtml_function_coverage=1 00:11:34.132 --rc genhtml_legend=1 00:11:34.132 --rc geninfo_all_blocks=1 00:11:34.132 --rc geninfo_unexecuted_blocks=1 00:11:34.133 00:11:34.133 ' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.133 --rc genhtml_branch_coverage=1 00:11:34.133 --rc genhtml_function_coverage=1 00:11:34.133 --rc genhtml_legend=1 00:11:34.133 --rc geninfo_all_blocks=1 00:11:34.133 --rc geninfo_unexecuted_blocks=1 00:11:34.133 00:11:34.133 ' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.133 --rc genhtml_branch_coverage=1 00:11:34.133 --rc genhtml_function_coverage=1 00:11:34.133 --rc genhtml_legend=1 00:11:34.133 --rc geninfo_all_blocks=1 00:11:34.133 --rc geninfo_unexecuted_blocks=1 00:11:34.133 00:11:34.133 ' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.133 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:34.133 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:34.392 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:34.392 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.392 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.392 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:34.393 
01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:34.393 Cannot find device "nvmf_init_br" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:34.393 Cannot find device "nvmf_init_br2" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:34.393 Cannot find device "nvmf_tgt_br" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:34.393 Cannot find device "nvmf_tgt_br2" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:34.393 Cannot find device "nvmf_init_br" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:34.393 Cannot find device "nvmf_init_br2" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:34.393 Cannot find device "nvmf_tgt_br" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:34.393 Cannot find device "nvmf_tgt_br2" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:34.393 Cannot find device "nvmf_br" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:34.393 Cannot find device "nvmf_init_if" 00:11:34.393 01:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:34.393 Cannot find device "nvmf_init_if2" 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.393 01:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:34.393 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.652 01:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:34.652 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.652 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:11:34.652 00:11:34.652 --- 10.0.0.3 ping statistics --- 00:11:34.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.652 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:34.652 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:34.652 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:11:34.652 00:11:34.652 --- 10.0.0.4 ping statistics --- 00:11:34.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.652 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:34.652 00:11:34.652 --- 10.0.0.1 ping statistics --- 00:11:34.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.652 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:34.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:34.652 00:11:34.652 --- 10.0.0.2 ping statistics --- 00:11:34.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.652 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.652 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81871 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81871 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81871 ']' 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
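[Editor's note] The block above is nvmf_veth_init: it first tries to delete any leftover interfaces (hence the expected "Cannot find device" / "Cannot open network namespace" messages), then rebuilds the test topology: a network namespace for the target, veth pairs for initiator and target, a bridge joining them, 10.0.0.1-10.0.0.4/24 addresses, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A condensed sketch of one initiator/target pair, using the same names and addresses as the trace (the second pair, nvmf_init_if2/nvmf_tgt_if2, is built the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                   # initiator reaching the target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # and back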
00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.653 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.911 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.911 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:34.911 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:34.912 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.912 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=81890 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0cbe0b0eaf56d500529e4758ae92d66d11423827b7c17563 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cbW 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0cbe0b0eaf56d500529e4758ae92d66d11423827b7c17563 0 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0cbe0b0eaf56d500529e4758ae92d66d11423827b7c17563 0 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0cbe0b0eaf56d500529e4758ae92d66d11423827b7c17563 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.171 01:33:05 
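[Editor's note] nvmfappstart launches the target application inside the namespace, and auth.sh starts a second SPDK application that plays the host/initiator role, each with its own RPC socket and auth debug log flag. Condensed from the traced commands (paths as in the log):

    # target: NVMe-oF target with DH-HMAC-CHAP tracing, RPC on the default /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    # host: generic spdk_tgt used as the initiator, RPC on /var/tmp/host.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
    # the script then polls each RPC socket (waitforlisten) before issuing any rpc.py calls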
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cbW 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cbW 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cbW 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4b7904125d630e4b5cce5b53ff3ceb96a7dab93f1b3a933e3f5d80bf70a701e2 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6ae 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4b7904125d630e4b5cce5b53ff3ceb96a7dab93f1b3a933e3f5d80bf70a701e2 3 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4b7904125d630e4b5cce5b53ff3ceb96a7dab93f1b3a933e3f5d80bf70a701e2 3 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4b7904125d630e4b5cce5b53ff3ceb96a7dab93f1b3a933e3f5d80bf70a701e2 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6ae 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6ae 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.6ae 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:35.171 01:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b396512c763c0d18d5a299d3e4a2763a 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.VaE 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b396512c763c0d18d5a299d3e4a2763a 1 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b396512c763c0d18d5a299d3e4a2763a 1 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b396512c763c0d18d5a299d3e4a2763a 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.VaE 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.VaE 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.VaE 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:35.171 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=73ef415de718754b8aab5e23720419ac89c10bc472c96f89 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vJR 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 73ef415de718754b8aab5e23720419ac89c10bc472c96f89 2 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 73ef415de718754b8aab5e23720419ac89c10bc472c96f89 2 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=73ef415de718754b8aab5e23720419ac89c10bc472c96f89 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:35.172 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vJR 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vJR 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.vJR 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b2401a699837f86641a059c253d528bbc3884d027e616a0 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.5sH 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b2401a699837f86641a059c253d528bbc3884d027e616a0 2 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b2401a699837f86641a059c253d528bbc3884d027e616a0 2 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b2401a699837f86641a059c253d528bbc3884d027e616a0 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.5sH 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.5sH 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.5sH 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.431 01:33:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=494c5438e3fed4d4b22824d4ef426694 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Rdf 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 494c5438e3fed4d4b22824d4ef426694 1 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 494c5438e3fed4d4b22824d4ef426694 1 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=494c5438e3fed4d4b22824d4ef426694 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Rdf 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Rdf 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Rdf 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:35.431 01:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=575275f8726b43aad1e0f74a59c114a4758ed722b4019f9e062eb48012c37930 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.C0o 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
575275f8726b43aad1e0f74a59c114a4758ed722b4019f9e062eb48012c37930 3 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 575275f8726b43aad1e0f74a59c114a4758ed722b4019f9e062eb48012c37930 3 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:35.431 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=575275f8726b43aad1e0f74a59c114a4758ed722b4019f9e062eb48012c37930 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.C0o 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.C0o 00:11:35.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.C0o 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 81871 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81871 ']' 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.432 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 81890 /var/tmp/host.sock 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81890 ']' 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
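[Editor's note] The lines above are gen_dhchap_key building the four target keys and three controller keys: 24 or 32 random bytes are read as a 48- or 64-character hex string, wrapped into a DHHC-1 envelope, written to a mktemp file and chmod 0600. Judging by the DHHC-1:00: secret handed to nvme connect later in this log, the envelope is the base64 of that same hex string followed by a CRC-32, which matches the usual DH-HMAC-CHAP key format; the sketch below reproduces that interpretation and is an illustration, not the helper from test/nvmf/common.sh:

    key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex characters, used verbatim as the secret
    python3 -c 'import base64,struct,sys,zlib; s=sys.argv[1].encode(); h=int(sys.argv[2]); print("DHHC-1:%02x:%s:" % (h, base64.b64encode(s + struct.pack("<I", zlib.crc32(s) & 0xffffffff)).decode()))' "$key" 0
    # second argument: 0 = no hash (null), 1 = sha256, 2 = sha384, 3 = sha512; the result is
    # written to a mktemp file (e.g. /tmp/spdk.key-null.XXX) and chmod 0600 before registration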
00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.000 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cbW 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cbW 00:11:36.259 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cbW 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.6ae ]] 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6ae 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6ae 00:11:36.518 01:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6ae 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VaE 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VaE 00:11:36.777 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VaE 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.vJR ]] 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vJR 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vJR 00:11:37.036 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vJR 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5sH 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5sH 00:11:37.295 01:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5sH 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Rdf ]] 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Rdf 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Rdf 00:11:37.554 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Rdf 00:11:37.813 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:37.813 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.C0o 00:11:37.813 01:33:08 
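[Editor's note] Each generated key file is registered under a stable name on both applications: the target (rpc_cmd talks to the default /var/tmp/spdk.sock) and the host (hostrpc passes -s /var/tmp/host.sock). Condensed from the trace for key0/ckey0; keys 1-3 and the remaining ckey* controller keys follow the same pattern:

    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.cbW                          # target side
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cbW    # host side
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6ae
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6ae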
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.813 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.813 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.813 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.C0o 00:11:37.813 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.C0o 00:11:38.072 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:38.072 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:38.072 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:38.072 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.072 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.072 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.331 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.332 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.332 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.332 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.332 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.332 01:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.590 00:11:38.590 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.590 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.590 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.160 { 00:11:39.160 "cntlid": 1, 00:11:39.160 "qid": 0, 00:11:39.160 "state": "enabled", 00:11:39.160 "thread": "nvmf_tgt_poll_group_000", 00:11:39.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:39.160 "listen_address": { 00:11:39.160 "trtype": "TCP", 00:11:39.160 "adrfam": "IPv4", 00:11:39.160 "traddr": "10.0.0.3", 00:11:39.160 "trsvcid": "4420" 00:11:39.160 }, 00:11:39.160 "peer_address": { 00:11:39.160 "trtype": "TCP", 00:11:39.160 "adrfam": "IPv4", 00:11:39.160 "traddr": "10.0.0.1", 00:11:39.160 "trsvcid": "42154" 00:11:39.160 }, 00:11:39.160 "auth": { 00:11:39.160 "state": "completed", 00:11:39.160 "digest": "sha256", 00:11:39.160 "dhgroup": "null" 00:11:39.160 } 00:11:39.160 } 00:11:39.160 ]' 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.160 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.419 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:11:39.419 01:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
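[Editor's note] This is one connect_authenticate round for digest sha256, DH group null, key 0: the host is restricted to that digest/group, the host NQN is authorized on the subsystem with key0/ckey0, the host attaches a controller using the same key names, and the resulting qpair is checked for auth state "completed" with the expected digest and group before detaching. Condensed from the traced RPCs:

    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # .auth.state should be "completed"
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0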
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.609 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.889 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.889 01:33:14 
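[Editor's note] The same key material is then exercised through the Linux kernel initiator: nvme connect receives the plaintext DHHC-1 strings directly (the host secret printed above decodes to the key0 hex string, the controller secret to ckey0), the connection is torn down with nvme disconnect, and nvmf_subsystem_remove_host deauthorizes the host before the next key is configured. Condensed, with the long DHHC-1 strings abbreviated here because they appear verbatim in the trace:

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 \
        --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 \
        --dhchap-secret 'DHHC-1:00:MGNiZTBiMGVhZjU2...' \
        --dhchap-ctrl-secret 'DHHC-1:03:NGI3OTA0MTI1ZDYz...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053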
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.158 00:11:44.158 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.158 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.158 01:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.417 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.417 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.417 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.417 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.676 { 00:11:44.676 "cntlid": 3, 00:11:44.676 "qid": 0, 00:11:44.676 "state": "enabled", 00:11:44.676 "thread": "nvmf_tgt_poll_group_000", 00:11:44.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:44.676 "listen_address": { 00:11:44.676 "trtype": "TCP", 00:11:44.676 "adrfam": "IPv4", 00:11:44.676 "traddr": "10.0.0.3", 00:11:44.676 "trsvcid": "4420" 00:11:44.676 }, 00:11:44.676 "peer_address": { 00:11:44.676 "trtype": "TCP", 00:11:44.676 "adrfam": "IPv4", 00:11:44.676 "traddr": "10.0.0.1", 00:11:44.676 "trsvcid": "42176" 00:11:44.676 }, 00:11:44.676 "auth": { 00:11:44.676 "state": "completed", 00:11:44.676 "digest": "sha256", 00:11:44.676 "dhgroup": "null" 00:11:44.676 } 00:11:44.676 } 00:11:44.676 ]' 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.676 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.934 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret 
DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:11:44.934 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:11:45.869 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.870 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.437 00:11:46.437 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.437 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.437 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.695 { 00:11:46.695 "cntlid": 5, 00:11:46.695 "qid": 0, 00:11:46.695 "state": "enabled", 00:11:46.695 "thread": "nvmf_tgt_poll_group_000", 00:11:46.695 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:46.695 "listen_address": { 00:11:46.695 "trtype": "TCP", 00:11:46.695 "adrfam": "IPv4", 00:11:46.695 "traddr": "10.0.0.3", 00:11:46.695 "trsvcid": "4420" 00:11:46.695 }, 00:11:46.695 "peer_address": { 00:11:46.695 "trtype": "TCP", 00:11:46.695 "adrfam": "IPv4", 00:11:46.695 "traddr": "10.0.0.1", 00:11:46.695 "trsvcid": "42210" 00:11:46.695 }, 00:11:46.695 "auth": { 00:11:46.695 "state": "completed", 00:11:46.695 "digest": "sha256", 00:11:46.695 "dhgroup": "null" 00:11:46.695 } 00:11:46.695 } 00:11:46.695 ]' 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.695 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.954 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:11:46.954 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:47.522 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.781 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.349 00:11:48.349 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.349 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.349 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.608 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.608 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.609 { 00:11:48.609 "cntlid": 7, 00:11:48.609 "qid": 0, 00:11:48.609 "state": "enabled", 00:11:48.609 "thread": "nvmf_tgt_poll_group_000", 00:11:48.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:48.609 "listen_address": { 00:11:48.609 "trtype": "TCP", 00:11:48.609 "adrfam": "IPv4", 00:11:48.609 "traddr": "10.0.0.3", 00:11:48.609 "trsvcid": "4420" 00:11:48.609 }, 00:11:48.609 "peer_address": { 00:11:48.609 "trtype": "TCP", 00:11:48.609 "adrfam": "IPv4", 00:11:48.609 "traddr": "10.0.0.1", 00:11:48.609 "trsvcid": "34036" 00:11:48.609 }, 00:11:48.609 "auth": { 00:11:48.609 "state": "completed", 00:11:48.609 "digest": "sha256", 00:11:48.609 "dhgroup": "null" 00:11:48.609 } 00:11:48.609 } 00:11:48.609 ]' 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.609 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.868 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:11:48.868 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.804 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.064 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.323 00:11:50.323 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.323 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.323 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.582 { 00:11:50.582 "cntlid": 9, 00:11:50.582 "qid": 0, 00:11:50.582 "state": "enabled", 00:11:50.582 "thread": "nvmf_tgt_poll_group_000", 00:11:50.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:50.582 "listen_address": { 00:11:50.582 "trtype": "TCP", 00:11:50.582 "adrfam": "IPv4", 00:11:50.582 "traddr": "10.0.0.3", 00:11:50.582 "trsvcid": "4420" 00:11:50.582 }, 00:11:50.582 "peer_address": { 00:11:50.582 "trtype": "TCP", 00:11:50.582 "adrfam": "IPv4", 00:11:50.582 "traddr": "10.0.0.1", 00:11:50.582 "trsvcid": "34058" 00:11:50.582 }, 00:11:50.582 "auth": { 00:11:50.582 "state": "completed", 00:11:50.582 "digest": "sha256", 00:11:50.582 "dhgroup": "ffdhe2048" 00:11:50.582 } 00:11:50.582 } 00:11:50.582 ]' 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.582 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.841 
01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:11:50.841 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:51.777 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.036 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.295 00:11:52.295 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.295 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.295 01:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.554 { 00:11:52.554 "cntlid": 11, 00:11:52.554 "qid": 0, 00:11:52.554 "state": "enabled", 00:11:52.554 "thread": "nvmf_tgt_poll_group_000", 00:11:52.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:52.554 "listen_address": { 00:11:52.554 "trtype": "TCP", 00:11:52.554 "adrfam": "IPv4", 00:11:52.554 "traddr": "10.0.0.3", 00:11:52.554 "trsvcid": "4420" 00:11:52.554 }, 00:11:52.554 "peer_address": { 00:11:52.554 "trtype": "TCP", 00:11:52.554 "adrfam": "IPv4", 00:11:52.554 "traddr": "10.0.0.1", 00:11:52.554 "trsvcid": "34074" 00:11:52.554 }, 00:11:52.554 "auth": { 00:11:52.554 "state": "completed", 00:11:52.554 "digest": "sha256", 00:11:52.554 "dhgroup": "ffdhe2048" 00:11:52.554 } 00:11:52.554 } 00:11:52.554 ]' 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.554 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.814 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:52.814 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.814 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.814 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.814 
01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.073 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:11:53.073 01:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.641 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:53.900 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.468 00:11:54.468 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.468 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.468 01:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.468 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.468 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.468 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.468 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.726 { 00:11:54.726 "cntlid": 13, 00:11:54.726 "qid": 0, 00:11:54.726 "state": "enabled", 00:11:54.726 "thread": "nvmf_tgt_poll_group_000", 00:11:54.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:54.726 "listen_address": { 00:11:54.726 "trtype": "TCP", 00:11:54.726 "adrfam": "IPv4", 00:11:54.726 "traddr": "10.0.0.3", 00:11:54.726 "trsvcid": "4420" 00:11:54.726 }, 00:11:54.726 "peer_address": { 00:11:54.726 "trtype": "TCP", 00:11:54.726 "adrfam": "IPv4", 00:11:54.726 "traddr": "10.0.0.1", 00:11:54.726 "trsvcid": "34100" 00:11:54.726 }, 00:11:54.726 "auth": { 00:11:54.726 "state": "completed", 00:11:54.726 "digest": "sha256", 00:11:54.726 "dhgroup": "ffdhe2048" 00:11:54.726 } 00:11:54.726 } 00:11:54.726 ]' 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.726 01:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.726 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.985 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:11:54.985 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:55.552 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
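Each iteration in this trace repeats the same DH-HMAC-CHAP round trip, only varying the digest, dhgroup and key index. A minimal sketch of one pass, reconstructed from the commands visible in the trace, is shown below. It assumes the target is listening on 10.0.0.3:4420 with subsystem nqn.2024-03.io.spdk:cnode0, that the host-side SPDK application exposes its RPC socket at /var/tmp/host.sock, and that the DH-HMAC-CHAP keys have already been registered under the names key1/ckey1 on both sides (as set up earlier in this log); only RPC names and flags that appear in the trace are used, and the paths are illustrative.

#!/usr/bin/env bash
# Sketch of one authentication pass from target/auth.sh, under the assumptions above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053

# Restrict the host to one digest/dhgroup combination for this iteration.
"$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Allow the host NQN on the target, binding it to the DH-HMAC-CHAP key pair.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Connect from the host side with the same key pair.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller exists and that the target reports a completed auth exchange.
[[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | .state, .digest, .dhgroup'
# expected output: completed / sha256 / ffdhe2048

# Tear down before the next digest/dhgroup/key combination.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The per-combination checks in the trace ([[ sha256 == sha256 ]], [[ ffdhe2048 == ffdhe2048 ]], [[ completed == completed ]]) correspond to the three jq fields read near the end of this sketch.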
00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:55.813 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.385 00:11:56.385 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:56.385 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:56.385 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.385 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.385 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.385 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.385 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.644 { 00:11:56.644 "cntlid": 15, 00:11:56.644 "qid": 0, 00:11:56.644 "state": "enabled", 00:11:56.644 "thread": "nvmf_tgt_poll_group_000", 00:11:56.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:56.644 "listen_address": { 00:11:56.644 "trtype": "TCP", 00:11:56.644 "adrfam": "IPv4", 00:11:56.644 "traddr": "10.0.0.3", 00:11:56.644 "trsvcid": "4420" 00:11:56.644 }, 00:11:56.644 "peer_address": { 00:11:56.644 "trtype": "TCP", 00:11:56.644 "adrfam": "IPv4", 00:11:56.644 "traddr": "10.0.0.1", 00:11:56.644 "trsvcid": "34134" 00:11:56.644 }, 00:11:56.644 "auth": { 00:11:56.644 "state": "completed", 00:11:56.644 "digest": "sha256", 00:11:56.644 "dhgroup": "ffdhe2048" 00:11:56.644 } 00:11:56.644 } 00:11:56.644 ]' 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.644 
01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.644 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.903 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:11:56.903 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:57.841 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.100 00:11:58.100 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:58.100 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:58.100 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:58.669 { 00:11:58.669 "cntlid": 17, 00:11:58.669 "qid": 0, 00:11:58.669 "state": "enabled", 00:11:58.669 "thread": "nvmf_tgt_poll_group_000", 00:11:58.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:11:58.669 "listen_address": { 00:11:58.669 "trtype": "TCP", 00:11:58.669 "adrfam": "IPv4", 00:11:58.669 "traddr": "10.0.0.3", 00:11:58.669 "trsvcid": "4420" 00:11:58.669 }, 00:11:58.669 "peer_address": { 00:11:58.669 "trtype": "TCP", 00:11:58.669 "adrfam": "IPv4", 00:11:58.669 "traddr": "10.0.0.1", 00:11:58.669 "trsvcid": "54662" 00:11:58.669 }, 00:11:58.669 "auth": { 00:11:58.669 "state": "completed", 00:11:58.669 "digest": "sha256", 00:11:58.669 "dhgroup": "ffdhe3072" 00:11:58.669 } 00:11:58.669 } 00:11:58.669 ]' 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.669 01:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.669 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.927 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:11:58.927 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.497 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:59.756 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.015 00:12:00.015 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:00.015 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.015 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.274 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.274 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.274 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.274 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.533 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.533 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.533 { 00:12:00.533 "cntlid": 19, 00:12:00.533 "qid": 0, 00:12:00.533 "state": "enabled", 00:12:00.533 "thread": "nvmf_tgt_poll_group_000", 00:12:00.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:00.533 "listen_address": { 00:12:00.533 "trtype": "TCP", 00:12:00.533 "adrfam": "IPv4", 00:12:00.533 "traddr": "10.0.0.3", 00:12:00.533 "trsvcid": "4420" 00:12:00.533 }, 00:12:00.533 "peer_address": { 00:12:00.533 "trtype": "TCP", 00:12:00.533 "adrfam": "IPv4", 00:12:00.533 "traddr": "10.0.0.1", 00:12:00.533 "trsvcid": "54690" 00:12:00.533 }, 00:12:00.533 "auth": { 00:12:00.533 "state": "completed", 00:12:00.533 "digest": "sha256", 00:12:00.533 "dhgroup": "ffdhe3072" 00:12:00.533 } 00:12:00.533 } 00:12:00.533 ]' 00:12:00.533 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.533 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.533 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.533 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:00.533 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.533 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.533 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.533 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.793 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:00.793 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:01.361 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.361 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:01.928 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.187 00:12:02.187 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.187 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.187 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.445 { 00:12:02.445 "cntlid": 21, 00:12:02.445 "qid": 0, 00:12:02.445 "state": "enabled", 00:12:02.445 "thread": "nvmf_tgt_poll_group_000", 00:12:02.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:02.445 "listen_address": { 00:12:02.445 "trtype": "TCP", 00:12:02.445 "adrfam": "IPv4", 00:12:02.445 "traddr": "10.0.0.3", 00:12:02.445 "trsvcid": "4420" 00:12:02.445 }, 00:12:02.445 "peer_address": { 00:12:02.445 "trtype": "TCP", 00:12:02.445 "adrfam": "IPv4", 00:12:02.445 "traddr": "10.0.0.1", 00:12:02.445 "trsvcid": "54716" 00:12:02.445 }, 00:12:02.445 "auth": { 00:12:02.445 "state": "completed", 00:12:02.445 "digest": "sha256", 00:12:02.445 "dhgroup": "ffdhe3072" 00:12:02.445 } 00:12:02.445 } 00:12:02.445 ]' 00:12:02.445 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.445 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.445 01:33:33 
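The stretch above is one full pass of the suite's connect_authenticate helper for sha256/ffdhe3072 with key index 2: the host NQN is registered on the subsystem with a DH-HMAC-CHAP key pair, a bdev controller is attached from the host-side SPDK instance with the same pair, and the freshly created queue pair is read back to confirm what was actually negotiated. A condensed sketch of that pass, assuming key2/ckey2 were registered earlier in the run and that the target answers on its default RPC socket (the log goes through the suite's rpc_cmd and hostrpc wrappers):

# Condensed sketch of the connect_authenticate pass shown above
# (sha256 / ffdhe3072 / key index 2).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053

# Target side: allow this host NQN on the subsystem with a host key and a
# controller (bidirectional) key. key2/ckey2 are assumed to be key names
# set up earlier in the run.
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller through the second SPDK instance
# (/var/tmp/host.sock) with the same key pair, then confirm it came up.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Read the negotiated auth parameters back from the target's qpair list.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]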
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.445 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:02.445 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.702 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.702 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.702 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.961 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:02.961 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:03.528 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:03.528 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:03.788 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:04.047 00:12:04.047 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.047 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.047 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.306 { 00:12:04.306 "cntlid": 23, 00:12:04.306 "qid": 0, 00:12:04.306 "state": "enabled", 00:12:04.306 "thread": "nvmf_tgt_poll_group_000", 00:12:04.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:04.306 "listen_address": { 00:12:04.306 "trtype": "TCP", 00:12:04.306 "adrfam": "IPv4", 00:12:04.306 "traddr": "10.0.0.3", 00:12:04.306 "trsvcid": "4420" 00:12:04.306 }, 00:12:04.306 "peer_address": { 00:12:04.306 "trtype": "TCP", 00:12:04.306 "adrfam": "IPv4", 00:12:04.306 "traddr": "10.0.0.1", 00:12:04.306 "trsvcid": "54750" 00:12:04.306 }, 00:12:04.306 "auth": { 00:12:04.306 "state": "completed", 00:12:04.306 "digest": "sha256", 00:12:04.306 "dhgroup": "ffdhe3072" 00:12:04.306 } 00:12:04.306 } 00:12:04.306 ]' 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:04.306 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.565 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:04.565 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.565 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.565 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.565 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.822 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:04.823 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.390 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.648 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.649 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.649 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.649 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.649 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.649 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:05.907 00:12:05.907 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.907 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.907 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.167 { 00:12:06.167 "cntlid": 25, 00:12:06.167 "qid": 0, 00:12:06.167 "state": "enabled", 00:12:06.167 "thread": "nvmf_tgt_poll_group_000", 00:12:06.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:06.167 "listen_address": { 00:12:06.167 "trtype": "TCP", 00:12:06.167 "adrfam": "IPv4", 00:12:06.167 "traddr": "10.0.0.3", 00:12:06.167 "trsvcid": "4420" 00:12:06.167 }, 00:12:06.167 "peer_address": { 00:12:06.167 "trtype": "TCP", 00:12:06.167 "adrfam": "IPv4", 00:12:06.167 "traddr": "10.0.0.1", 00:12:06.167 "trsvcid": "54768" 00:12:06.167 }, 00:12:06.167 "auth": { 00:12:06.167 "state": "completed", 00:12:06.167 "digest": "sha256", 00:12:06.167 "dhgroup": "ffdhe4096" 00:12:06.167 } 00:12:06.167 } 00:12:06.167 ]' 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:06.167 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.426 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:06.426 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.426 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.426 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.426 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.685 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:06.685 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.253 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:07.513 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:08.081 00:12:08.081 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.081 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.081 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:08.341 { 00:12:08.341 "cntlid": 27, 00:12:08.341 "qid": 0, 00:12:08.341 "state": "enabled", 00:12:08.341 "thread": "nvmf_tgt_poll_group_000", 00:12:08.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:08.341 "listen_address": { 00:12:08.341 "trtype": "TCP", 00:12:08.341 "adrfam": "IPv4", 00:12:08.341 "traddr": "10.0.0.3", 00:12:08.341 "trsvcid": "4420" 00:12:08.341 }, 00:12:08.341 "peer_address": { 00:12:08.341 "trtype": "TCP", 00:12:08.341 "adrfam": "IPv4", 00:12:08.341 "traddr": "10.0.0.1", 00:12:08.341 "trsvcid": "58864" 00:12:08.341 }, 00:12:08.341 "auth": { 00:12:08.341 "state": "completed", 
00:12:08.341 "digest": "sha256", 00:12:08.341 "dhgroup": "ffdhe4096" 00:12:08.341 } 00:12:08.341 } 00:12:08.341 ]' 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.341 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.909 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:08.909 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:09.478 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.737 01:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:09.737 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:10.306 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.306 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.565 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.565 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.565 { 00:12:10.565 "cntlid": 29, 00:12:10.565 "qid": 0, 00:12:10.565 "state": "enabled", 00:12:10.565 "thread": "nvmf_tgt_poll_group_000", 00:12:10.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:10.565 "listen_address": { 00:12:10.565 "trtype": "TCP", 00:12:10.565 "adrfam": "IPv4", 00:12:10.565 "traddr": "10.0.0.3", 00:12:10.565 "trsvcid": "4420" 00:12:10.565 }, 00:12:10.565 "peer_address": { 00:12:10.565 "trtype": "TCP", 00:12:10.565 "adrfam": 
"IPv4", 00:12:10.565 "traddr": "10.0.0.1", 00:12:10.565 "trsvcid": "58896" 00:12:10.565 }, 00:12:10.565 "auth": { 00:12:10.565 "state": "completed", 00:12:10.565 "digest": "sha256", 00:12:10.565 "dhgroup": "ffdhe4096" 00:12:10.565 } 00:12:10.565 } 00:12:10.565 ]' 00:12:10.565 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.565 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.824 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:10.824 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:11.392 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.392 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:11.392 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.392 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.392 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.392 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.392 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.392 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:11.651 01:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:11.651 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:12.220 00:12:12.220 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.220 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.220 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:12.479 { 00:12:12.479 "cntlid": 31, 00:12:12.479 "qid": 0, 00:12:12.479 "state": "enabled", 00:12:12.479 "thread": "nvmf_tgt_poll_group_000", 00:12:12.479 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:12.479 "listen_address": { 00:12:12.479 "trtype": "TCP", 00:12:12.479 "adrfam": "IPv4", 00:12:12.479 "traddr": "10.0.0.3", 00:12:12.479 "trsvcid": "4420" 00:12:12.479 }, 00:12:12.479 "peer_address": { 00:12:12.479 "trtype": "TCP", 
00:12:12.479 "adrfam": "IPv4", 00:12:12.479 "traddr": "10.0.0.1", 00:12:12.479 "trsvcid": "58926" 00:12:12.479 }, 00:12:12.479 "auth": { 00:12:12.479 "state": "completed", 00:12:12.479 "digest": "sha256", 00:12:12.479 "dhgroup": "ffdhe4096" 00:12:12.479 } 00:12:12.479 } 00:12:12.479 ]' 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.479 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:12.479 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:12.479 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.480 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.480 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.480 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.739 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:12.739 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.676 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:13.676 
01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:13.676 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:14.244 00:12:14.244 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.244 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.244 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.503 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.503 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.503 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.503 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.503 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.503 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:14.503 { 00:12:14.503 "cntlid": 33, 00:12:14.503 "qid": 0, 00:12:14.503 "state": "enabled", 00:12:14.504 "thread": "nvmf_tgt_poll_group_000", 00:12:14.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:14.504 "listen_address": { 00:12:14.504 "trtype": "TCP", 00:12:14.504 "adrfam": "IPv4", 00:12:14.504 "traddr": 
"10.0.0.3", 00:12:14.504 "trsvcid": "4420" 00:12:14.504 }, 00:12:14.504 "peer_address": { 00:12:14.504 "trtype": "TCP", 00:12:14.504 "adrfam": "IPv4", 00:12:14.504 "traddr": "10.0.0.1", 00:12:14.504 "trsvcid": "58964" 00:12:14.504 }, 00:12:14.504 "auth": { 00:12:14.504 "state": "completed", 00:12:14.504 "digest": "sha256", 00:12:14.504 "dhgroup": "ffdhe6144" 00:12:14.504 } 00:12:14.504 } 00:12:14.504 ]' 00:12:14.504 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:14.504 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.504 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:14.504 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.504 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:14.504 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.504 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.504 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.762 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:14.762 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.329 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.588 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.847 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.847 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.847 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:15.847 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:16.106 00:12:16.106 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.106 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.106 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:16.365 { 00:12:16.365 "cntlid": 35, 00:12:16.365 "qid": 0, 00:12:16.365 "state": "enabled", 00:12:16.365 "thread": "nvmf_tgt_poll_group_000", 
00:12:16.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:16.365 "listen_address": { 00:12:16.365 "trtype": "TCP", 00:12:16.365 "adrfam": "IPv4", 00:12:16.365 "traddr": "10.0.0.3", 00:12:16.365 "trsvcid": "4420" 00:12:16.365 }, 00:12:16.365 "peer_address": { 00:12:16.365 "trtype": "TCP", 00:12:16.365 "adrfam": "IPv4", 00:12:16.365 "traddr": "10.0.0.1", 00:12:16.365 "trsvcid": "58990" 00:12:16.365 }, 00:12:16.365 "auth": { 00:12:16.365 "state": "completed", 00:12:16.365 "digest": "sha256", 00:12:16.365 "dhgroup": "ffdhe6144" 00:12:16.365 } 00:12:16.365 } 00:12:16.365 ]' 00:12:16.365 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.624 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.886 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:16.886 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.453 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:17.453 01:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:18.020 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.021 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.279 00:12:18.279 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:18.279 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.279 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.537 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.537 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.537 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.537 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:18.796 { 
00:12:18.796 "cntlid": 37, 00:12:18.796 "qid": 0, 00:12:18.796 "state": "enabled", 00:12:18.796 "thread": "nvmf_tgt_poll_group_000", 00:12:18.796 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:18.796 "listen_address": { 00:12:18.796 "trtype": "TCP", 00:12:18.796 "adrfam": "IPv4", 00:12:18.796 "traddr": "10.0.0.3", 00:12:18.796 "trsvcid": "4420" 00:12:18.796 }, 00:12:18.796 "peer_address": { 00:12:18.796 "trtype": "TCP", 00:12:18.796 "adrfam": "IPv4", 00:12:18.796 "traddr": "10.0.0.1", 00:12:18.796 "trsvcid": "49752" 00:12:18.796 }, 00:12:18.796 "auth": { 00:12:18.796 "state": "completed", 00:12:18.796 "digest": "sha256", 00:12:18.796 "dhgroup": "ffdhe6144" 00:12:18.796 } 00:12:18.796 } 00:12:18.796 ]' 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.796 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.069 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:19.069 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.015 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.016 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.584 00:12:20.584 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:20.584 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:20.584 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:12:20.843 { 00:12:20.843 "cntlid": 39, 00:12:20.843 "qid": 0, 00:12:20.843 "state": "enabled", 00:12:20.843 "thread": "nvmf_tgt_poll_group_000", 00:12:20.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:20.843 "listen_address": { 00:12:20.843 "trtype": "TCP", 00:12:20.843 "adrfam": "IPv4", 00:12:20.843 "traddr": "10.0.0.3", 00:12:20.843 "trsvcid": "4420" 00:12:20.843 }, 00:12:20.843 "peer_address": { 00:12:20.843 "trtype": "TCP", 00:12:20.843 "adrfam": "IPv4", 00:12:20.843 "traddr": "10.0.0.1", 00:12:20.843 "trsvcid": "49768" 00:12:20.843 }, 00:12:20.843 "auth": { 00:12:20.843 "state": "completed", 00:12:20.843 "digest": "sha256", 00:12:20.843 "dhgroup": "ffdhe6144" 00:12:20.843 } 00:12:20.843 } 00:12:20.843 ]' 00:12:20.843 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.101 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.360 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:21.360 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.295 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.296 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:22.296 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.232 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.232 { 00:12:23.232 "cntlid": 41, 00:12:23.232 "qid": 0, 00:12:23.232 "state": "enabled", 00:12:23.232 "thread": "nvmf_tgt_poll_group_000", 00:12:23.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:23.232 "listen_address": { 00:12:23.232 "trtype": "TCP", 00:12:23.232 "adrfam": "IPv4", 00:12:23.232 "traddr": "10.0.0.3", 00:12:23.232 "trsvcid": "4420" 00:12:23.232 }, 00:12:23.232 "peer_address": { 00:12:23.232 "trtype": "TCP", 00:12:23.232 "adrfam": "IPv4", 00:12:23.232 "traddr": "10.0.0.1", 00:12:23.232 "trsvcid": "49796" 00:12:23.232 }, 00:12:23.232 "auth": { 00:12:23.232 "state": "completed", 00:12:23.232 "digest": "sha256", 00:12:23.232 "dhgroup": "ffdhe8192" 00:12:23.232 } 00:12:23.232 } 00:12:23.232 ]' 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.232 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:23.491 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.491 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:23.491 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.491 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.491 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.491 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.749 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:23.749 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
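Each per-key round in this trace follows the same pattern: restrict the host bdev layer to one digest/DH-group pair, authorize the host NQN on the target with a named key (plus an optional controller key for bidirectional auth), then attach a controller so the DH-HMAC-CHAP handshake actually runs. A minimal sketch of that round, using only RPCs and addresses visible in this log (the key names key1/ckey1 are assumed to have been registered earlier in target/auth.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053

    # Host side (the bdev_nvme app on /var/tmp/host.sock): allow one digest/DH-group combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: authorize the host with a key; the controller key enables bidirectional auth.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller, which triggers the DH-HMAC-CHAP handshake over TCP.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1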
00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.317 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:24.885 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:24.886 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.453 00:12:25.453 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.453 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.453 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.712 01:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.712 { 00:12:25.712 "cntlid": 43, 00:12:25.712 "qid": 0, 00:12:25.712 "state": "enabled", 00:12:25.712 "thread": "nvmf_tgt_poll_group_000", 00:12:25.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:25.712 "listen_address": { 00:12:25.712 "trtype": "TCP", 00:12:25.712 "adrfam": "IPv4", 00:12:25.712 "traddr": "10.0.0.3", 00:12:25.712 "trsvcid": "4420" 00:12:25.712 }, 00:12:25.712 "peer_address": { 00:12:25.712 "trtype": "TCP", 00:12:25.712 "adrfam": "IPv4", 00:12:25.712 "traddr": "10.0.0.1", 00:12:25.712 "trsvcid": "49826" 00:12:25.712 }, 00:12:25.712 "auth": { 00:12:25.712 "state": "completed", 00:12:25.712 "digest": "sha256", 00:12:25.712 "dhgroup": "ffdhe8192" 00:12:25.712 } 00:12:25.712 } 00:12:25.712 ]' 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.712 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.280 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:26.280 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
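After each attach, the script checks that the target really negotiated the expected parameters by dumping the subsystem's queue pairs and inspecting the auth block, then detaches before the next key is tried. Roughly, with jq paths matching the checks in this log and $rpc/$subnqn as in the earlier sketch:

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear the host-side controller down again before the next combination is exercised.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0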
00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:26.848 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.107 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.675 00:12:27.675 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.675 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.675 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.242 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.242 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.242 01:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.243 { 00:12:28.243 "cntlid": 45, 00:12:28.243 "qid": 0, 00:12:28.243 "state": "enabled", 00:12:28.243 "thread": "nvmf_tgt_poll_group_000", 00:12:28.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:28.243 "listen_address": { 00:12:28.243 "trtype": "TCP", 00:12:28.243 "adrfam": "IPv4", 00:12:28.243 "traddr": "10.0.0.3", 00:12:28.243 "trsvcid": "4420" 00:12:28.243 }, 00:12:28.243 "peer_address": { 00:12:28.243 "trtype": "TCP", 00:12:28.243 "adrfam": "IPv4", 00:12:28.243 "traddr": "10.0.0.1", 00:12:28.243 "trsvcid": "49050" 00:12:28.243 }, 00:12:28.243 "auth": { 00:12:28.243 "state": "completed", 00:12:28.243 "digest": "sha256", 00:12:28.243 "dhgroup": "ffdhe8192" 00:12:28.243 } 00:12:28.243 } 00:12:28.243 ]' 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.243 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.501 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:28.501 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
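The second half of each round repeats the handshake with the kernel initiator instead of the SPDK host app: nvme-cli is handed the secrets directly in DHHC-1 form and is expected to connect and then disconnect cleanly. Condensed from the commands in this log (the secret strings below are placeholders, not the test's actual keys):

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=febd874a-f7ac-4dde-b5e1-60c80814d053
    nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:02:<host secret>' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl secret>'
    nvme disconnect -n "$subnqn"   # success prints "... disconnected 1 controller(s)"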
00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.438 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:29.438 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.375 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.375 
01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.375 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.375 { 00:12:30.375 "cntlid": 47, 00:12:30.375 "qid": 0, 00:12:30.375 "state": "enabled", 00:12:30.375 "thread": "nvmf_tgt_poll_group_000", 00:12:30.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:30.375 "listen_address": { 00:12:30.375 "trtype": "TCP", 00:12:30.375 "adrfam": "IPv4", 00:12:30.376 "traddr": "10.0.0.3", 00:12:30.376 "trsvcid": "4420" 00:12:30.376 }, 00:12:30.376 "peer_address": { 00:12:30.376 "trtype": "TCP", 00:12:30.376 "adrfam": "IPv4", 00:12:30.376 "traddr": "10.0.0.1", 00:12:30.376 "trsvcid": "49082" 00:12:30.376 }, 00:12:30.376 "auth": { 00:12:30.376 "state": "completed", 00:12:30.376 "digest": "sha256", 00:12:30.376 "dhgroup": "ffdhe8192" 00:12:30.376 } 00:12:30.376 } 00:12:30.376 ]' 00:12:30.376 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.376 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.376 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.635 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:30.635 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.635 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.635 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.635 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.895 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:30.895 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:31.494 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
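At this point the trace moves from sha256/ffdhe8192 to sha384 with the null DH group, which is simply the outer loops of the test advancing. Reconstructed from the auth.sh@118-121 loop markers visible in this log, the sweep is essentially:

    # digests, dhgroups and keys hold the combinations under test; connect_authenticate is the
    # per-combination helper whose individual RPC calls make up most of this trace.
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done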
00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:31.494 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:31.753 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.011 00:12:32.011 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.011 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.011 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.270 { 00:12:32.270 "cntlid": 49, 00:12:32.270 "qid": 0, 00:12:32.270 "state": "enabled", 00:12:32.270 "thread": "nvmf_tgt_poll_group_000", 00:12:32.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:32.270 "listen_address": { 00:12:32.270 "trtype": "TCP", 00:12:32.270 "adrfam": "IPv4", 00:12:32.270 "traddr": "10.0.0.3", 00:12:32.270 "trsvcid": "4420" 00:12:32.270 }, 00:12:32.270 "peer_address": { 00:12:32.270 "trtype": "TCP", 00:12:32.270 "adrfam": "IPv4", 00:12:32.270 "traddr": "10.0.0.1", 00:12:32.270 "trsvcid": "49112" 00:12:32.270 }, 00:12:32.270 "auth": { 00:12:32.270 "state": "completed", 00:12:32.270 "digest": "sha384", 00:12:32.270 "dhgroup": "null" 00:12:32.270 } 00:12:32.270 } 00:12:32.270 ]' 00:12:32.270 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.529 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:32.529 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.529 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:32.529 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.529 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.529 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.529 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.787 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:32.788 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.724 01:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.724 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.983 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.983 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.983 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:33.983 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.242 00:12:34.242 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.242 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
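Note that the key3 rounds in this trace pass only --dhchap-key, never --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion at auth.sh@68 makes bidirectional authentication optional per key. A reduced illustration (the ckeys contents here are hypothetical; only the expansion itself mirrors the script):

    ckeys=(ckey0 ckey1 ckey2)                 # no entry for index 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    # With keyid=3 the array stays empty, so nvmf_subsystem_add_host and
    # bdev_nvme_attach_controller run without a controller key for that round.
    echo "extra args: ${ckey[*]:-<none>}"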
00:12:34.242 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:34.503 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:34.503 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.504 { 00:12:34.504 "cntlid": 51, 00:12:34.504 "qid": 0, 00:12:34.504 "state": "enabled", 00:12:34.504 "thread": "nvmf_tgt_poll_group_000", 00:12:34.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:34.504 "listen_address": { 00:12:34.504 "trtype": "TCP", 00:12:34.504 "adrfam": "IPv4", 00:12:34.504 "traddr": "10.0.0.3", 00:12:34.504 "trsvcid": "4420" 00:12:34.504 }, 00:12:34.504 "peer_address": { 00:12:34.504 "trtype": "TCP", 00:12:34.504 "adrfam": "IPv4", 00:12:34.504 "traddr": "10.0.0.1", 00:12:34.504 "trsvcid": "49136" 00:12:34.504 }, 00:12:34.504 "auth": { 00:12:34.504 "state": "completed", 00:12:34.504 "digest": "sha384", 00:12:34.504 "dhgroup": "null" 00:12:34.504 } 00:12:34.504 } 00:12:34.504 ]' 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:34.504 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.763 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.763 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.763 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.022 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:35.022 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:35.590 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.590 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:35.849 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.415 00:12:36.415 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:36.415 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.415 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:36.415 { 00:12:36.415 "cntlid": 53, 00:12:36.415 "qid": 0, 00:12:36.415 "state": "enabled", 00:12:36.415 "thread": "nvmf_tgt_poll_group_000", 00:12:36.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:36.415 "listen_address": { 00:12:36.415 "trtype": "TCP", 00:12:36.415 "adrfam": "IPv4", 00:12:36.415 "traddr": "10.0.0.3", 00:12:36.415 "trsvcid": "4420" 00:12:36.415 }, 00:12:36.415 "peer_address": { 00:12:36.415 "trtype": "TCP", 00:12:36.415 "adrfam": "IPv4", 00:12:36.415 "traddr": "10.0.0.1", 00:12:36.415 "trsvcid": "49158" 00:12:36.415 }, 00:12:36.415 "auth": { 00:12:36.415 "state": "completed", 00:12:36.415 "digest": "sha384", 00:12:36.415 "dhgroup": "null" 00:12:36.415 } 00:12:36.415 } 00:12:36.415 ]' 00:12:36.415 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:36.674 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.933 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:36.934 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:37.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.501 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:37.760 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.019 00:12:38.019 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.019 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.019 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.277 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.277 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.278 { 00:12:38.278 "cntlid": 55, 00:12:38.278 "qid": 0, 00:12:38.278 "state": "enabled", 00:12:38.278 "thread": "nvmf_tgt_poll_group_000", 00:12:38.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:38.278 "listen_address": { 00:12:38.278 "trtype": "TCP", 00:12:38.278 "adrfam": "IPv4", 00:12:38.278 "traddr": "10.0.0.3", 00:12:38.278 "trsvcid": "4420" 00:12:38.278 }, 00:12:38.278 "peer_address": { 00:12:38.278 "trtype": "TCP", 00:12:38.278 "adrfam": "IPv4", 00:12:38.278 "traddr": "10.0.0.1", 00:12:38.278 "trsvcid": "57678" 00:12:38.278 }, 00:12:38.278 "auth": { 00:12:38.278 "state": "completed", 00:12:38.278 "digest": "sha384", 00:12:38.278 "dhgroup": "null" 00:12:38.278 } 00:12:38.278 } 00:12:38.278 ]' 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:38.278 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.536 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.536 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.536 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.795 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:38.795 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.362 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.362 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:39.620 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.621 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:39.879 00:12:39.879 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:12:39.879 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.879 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.138 { 00:12:40.138 "cntlid": 57, 00:12:40.138 "qid": 0, 00:12:40.138 "state": "enabled", 00:12:40.138 "thread": "nvmf_tgt_poll_group_000", 00:12:40.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:40.138 "listen_address": { 00:12:40.138 "trtype": "TCP", 00:12:40.138 "adrfam": "IPv4", 00:12:40.138 "traddr": "10.0.0.3", 00:12:40.138 "trsvcid": "4420" 00:12:40.138 }, 00:12:40.138 "peer_address": { 00:12:40.138 "trtype": "TCP", 00:12:40.138 "adrfam": "IPv4", 00:12:40.138 "traddr": "10.0.0.1", 00:12:40.138 "trsvcid": "57706" 00:12:40.138 }, 00:12:40.138 "auth": { 00:12:40.138 "state": "completed", 00:12:40.138 "digest": "sha384", 00:12:40.138 "dhgroup": "ffdhe2048" 00:12:40.138 } 00:12:40.138 } 00:12:40.138 ]' 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.138 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.397 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:40.397 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: 
--dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:40.965 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.965 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:40.965 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.965 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.224 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.224 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.224 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.224 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.483 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.484 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:41.742 00:12:41.742 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.742 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.742 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.001 { 00:12:42.001 "cntlid": 59, 00:12:42.001 "qid": 0, 00:12:42.001 "state": "enabled", 00:12:42.001 "thread": "nvmf_tgt_poll_group_000", 00:12:42.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:42.001 "listen_address": { 00:12:42.001 "trtype": "TCP", 00:12:42.001 "adrfam": "IPv4", 00:12:42.001 "traddr": "10.0.0.3", 00:12:42.001 "trsvcid": "4420" 00:12:42.001 }, 00:12:42.001 "peer_address": { 00:12:42.001 "trtype": "TCP", 00:12:42.001 "adrfam": "IPv4", 00:12:42.001 "traddr": "10.0.0.1", 00:12:42.001 "trsvcid": "57734" 00:12:42.001 }, 00:12:42.001 "auth": { 00:12:42.001 "state": "completed", 00:12:42.001 "digest": "sha384", 00:12:42.001 "dhgroup": "ffdhe2048" 00:12:42.001 } 00:12:42.001 } 00:12:42.001 ]' 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.001 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.259 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:42.259 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:42.827 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.086 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:43.345 00:12:43.603 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.603 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.603 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.862 { 00:12:43.862 "cntlid": 61, 00:12:43.862 "qid": 0, 00:12:43.862 "state": "enabled", 00:12:43.862 "thread": "nvmf_tgt_poll_group_000", 00:12:43.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:43.862 "listen_address": { 00:12:43.862 "trtype": "TCP", 00:12:43.862 "adrfam": "IPv4", 00:12:43.862 "traddr": "10.0.0.3", 00:12:43.862 "trsvcid": "4420" 00:12:43.862 }, 00:12:43.862 "peer_address": { 00:12:43.862 "trtype": "TCP", 00:12:43.862 "adrfam": "IPv4", 00:12:43.862 "traddr": "10.0.0.1", 00:12:43.862 "trsvcid": "57768" 00:12:43.862 }, 00:12:43.862 "auth": { 00:12:43.862 "state": "completed", 00:12:43.862 "digest": "sha384", 00:12:43.862 "dhgroup": "ffdhe2048" 00:12:43.862 } 00:12:43.862 } 00:12:43.862 ]' 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.862 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.121 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:44.121 01:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:44.716 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:44.974 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:45.233 00:12:45.233 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.492 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.492 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.750 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.750 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.750 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.750 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.750 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.750 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.750 { 00:12:45.750 "cntlid": 63, 00:12:45.750 "qid": 0, 00:12:45.750 "state": "enabled", 00:12:45.750 "thread": "nvmf_tgt_poll_group_000", 00:12:45.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:45.750 "listen_address": { 00:12:45.750 "trtype": "TCP", 00:12:45.750 "adrfam": "IPv4", 00:12:45.750 "traddr": "10.0.0.3", 00:12:45.751 "trsvcid": "4420" 00:12:45.751 }, 00:12:45.751 "peer_address": { 00:12:45.751 "trtype": "TCP", 00:12:45.751 "adrfam": "IPv4", 00:12:45.751 "traddr": "10.0.0.1", 00:12:45.751 "trsvcid": "57794" 00:12:45.751 }, 00:12:45.751 "auth": { 00:12:45.751 "state": "completed", 00:12:45.751 "digest": "sha384", 00:12:45.751 "dhgroup": "ffdhe2048" 00:12:45.751 } 00:12:45.751 } 00:12:45.751 ]' 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.751 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.009 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:46.009 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:46.945 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:47.203 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:47.462 00:12:47.462 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.462 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.462 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.720 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.720 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.720 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.720 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.720 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.721 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.721 { 00:12:47.721 "cntlid": 65, 00:12:47.721 "qid": 0, 00:12:47.721 "state": "enabled", 00:12:47.721 "thread": "nvmf_tgt_poll_group_000", 00:12:47.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:47.721 "listen_address": { 00:12:47.721 "trtype": "TCP", 00:12:47.721 "adrfam": "IPv4", 00:12:47.721 "traddr": "10.0.0.3", 00:12:47.721 "trsvcid": "4420" 00:12:47.721 }, 00:12:47.721 "peer_address": { 00:12:47.721 "trtype": "TCP", 00:12:47.721 "adrfam": "IPv4", 00:12:47.721 "traddr": "10.0.0.1", 00:12:47.721 "trsvcid": "58174" 00:12:47.721 }, 00:12:47.721 "auth": { 00:12:47.721 "state": "completed", 00:12:47.721 "digest": "sha384", 00:12:47.721 "dhgroup": "ffdhe3072" 00:12:47.721 } 00:12:47.721 } 00:12:47.721 ]' 00:12:47.721 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.721 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.721 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.979 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.979 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.979 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.979 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.979 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.238 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:48.238 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:48.805 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:49.063 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:49.063 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.064 01:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.064 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:49.631 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.631 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.890 { 00:12:49.890 "cntlid": 67, 00:12:49.890 "qid": 0, 00:12:49.890 "state": "enabled", 00:12:49.890 "thread": "nvmf_tgt_poll_group_000", 00:12:49.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:49.890 "listen_address": { 00:12:49.890 "trtype": "TCP", 00:12:49.890 "adrfam": "IPv4", 00:12:49.890 "traddr": "10.0.0.3", 00:12:49.890 "trsvcid": "4420" 00:12:49.890 }, 00:12:49.890 "peer_address": { 00:12:49.890 "trtype": "TCP", 00:12:49.890 "adrfam": "IPv4", 00:12:49.890 "traddr": "10.0.0.1", 00:12:49.890 "trsvcid": "58184" 00:12:49.890 }, 00:12:49.890 "auth": { 00:12:49.890 "state": "completed", 00:12:49.890 "digest": "sha384", 00:12:49.890 "dhgroup": "ffdhe3072" 00:12:49.890 } 00:12:49.890 } 00:12:49.890 ]' 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.890 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.149 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:50.149 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.715 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:50.974 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:50.974 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.974 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:50.974 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:50.974 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:50.974 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.975 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:51.542 00:12:51.542 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.543 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.543 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.802 { 00:12:51.802 "cntlid": 69, 00:12:51.802 "qid": 0, 00:12:51.802 "state": "enabled", 00:12:51.802 "thread": "nvmf_tgt_poll_group_000", 00:12:51.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:51.802 "listen_address": { 00:12:51.802 "trtype": "TCP", 00:12:51.802 "adrfam": "IPv4", 00:12:51.802 "traddr": "10.0.0.3", 00:12:51.802 "trsvcid": "4420" 00:12:51.802 }, 00:12:51.802 "peer_address": { 00:12:51.802 "trtype": "TCP", 00:12:51.802 "adrfam": "IPv4", 00:12:51.802 "traddr": "10.0.0.1", 00:12:51.802 "trsvcid": "58206" 00:12:51.802 }, 00:12:51.802 "auth": { 00:12:51.802 "state": "completed", 00:12:51.802 "digest": "sha384", 00:12:51.802 "dhgroup": "ffdhe3072" 00:12:51.802 } 00:12:51.802 } 00:12:51.802 ]' 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:51.802 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.061 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:52.061 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:52.629 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:53.196 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:53.196 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.196 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:53.196 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:53.196 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.197 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:53.456 00:12:53.456 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.456 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.456 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.714 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.714 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.714 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.715 { 00:12:53.715 "cntlid": 71, 00:12:53.715 "qid": 0, 00:12:53.715 "state": "enabled", 00:12:53.715 "thread": "nvmf_tgt_poll_group_000", 00:12:53.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:53.715 "listen_address": { 00:12:53.715 "trtype": "TCP", 00:12:53.715 "adrfam": "IPv4", 00:12:53.715 "traddr": "10.0.0.3", 00:12:53.715 "trsvcid": "4420" 00:12:53.715 }, 00:12:53.715 "peer_address": { 00:12:53.715 "trtype": "TCP", 00:12:53.715 "adrfam": "IPv4", 00:12:53.715 "traddr": "10.0.0.1", 00:12:53.715 "trsvcid": "58222" 00:12:53.715 }, 00:12:53.715 "auth": { 00:12:53.715 "state": "completed", 00:12:53.715 "digest": "sha384", 00:12:53.715 "dhgroup": "ffdhe3072" 00:12:53.715 } 00:12:53.715 } 00:12:53.715 ]' 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.715 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.973 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:53.973 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.919 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.177 01:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.177 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.177 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.177 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.439 00:12:55.439 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.439 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.439 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.697 { 00:12:55.697 "cntlid": 73, 00:12:55.697 "qid": 0, 00:12:55.697 "state": "enabled", 00:12:55.697 "thread": "nvmf_tgt_poll_group_000", 00:12:55.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:55.697 "listen_address": { 00:12:55.697 "trtype": "TCP", 00:12:55.697 "adrfam": "IPv4", 00:12:55.697 "traddr": "10.0.0.3", 00:12:55.697 "trsvcid": "4420" 00:12:55.697 }, 00:12:55.697 "peer_address": { 00:12:55.697 "trtype": "TCP", 00:12:55.697 "adrfam": "IPv4", 00:12:55.697 "traddr": "10.0.0.1", 00:12:55.697 "trsvcid": "58254" 00:12:55.697 }, 00:12:55.697 "auth": { 00:12:55.697 "state": "completed", 00:12:55.697 "digest": "sha384", 00:12:55.697 "dhgroup": "ffdhe4096" 00:12:55.697 } 00:12:55.697 } 00:12:55.697 ]' 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:55.697 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.955 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:55.955 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.955 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.955 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.955 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.213 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:56.213 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.147 01:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.147 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.715 00:12:57.715 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.715 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.715 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.980 { 00:12:57.980 "cntlid": 75, 00:12:57.980 "qid": 0, 00:12:57.980 "state": "enabled", 00:12:57.980 "thread": "nvmf_tgt_poll_group_000", 00:12:57.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:12:57.980 "listen_address": { 00:12:57.980 "trtype": "TCP", 00:12:57.980 "adrfam": "IPv4", 00:12:57.980 "traddr": "10.0.0.3", 00:12:57.980 "trsvcid": "4420" 00:12:57.980 }, 00:12:57.980 "peer_address": { 00:12:57.980 "trtype": "TCP", 00:12:57.980 "adrfam": "IPv4", 00:12:57.980 "traddr": "10.0.0.1", 00:12:57.980 "trsvcid": "49634" 00:12:57.980 }, 00:12:57.980 "auth": { 00:12:57.980 "state": "completed", 00:12:57.980 "digest": "sha384", 00:12:57.980 "dhgroup": "ffdhe4096" 00:12:57.980 } 00:12:57.980 } 00:12:57.980 ]' 00:12:57.980 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.238 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.497 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:58.497 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.064 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.632 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:59.891 00:12:59.891 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.891 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.891 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.150 { 00:13:00.150 "cntlid": 77, 00:13:00.150 "qid": 0, 00:13:00.150 "state": "enabled", 00:13:00.150 "thread": "nvmf_tgt_poll_group_000", 00:13:00.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:00.150 "listen_address": { 00:13:00.150 "trtype": "TCP", 00:13:00.150 "adrfam": "IPv4", 00:13:00.150 "traddr": "10.0.0.3", 00:13:00.150 "trsvcid": "4420" 00:13:00.150 }, 00:13:00.150 "peer_address": { 00:13:00.150 "trtype": "TCP", 00:13:00.150 "adrfam": "IPv4", 00:13:00.150 "traddr": "10.0.0.1", 00:13:00.150 "trsvcid": "49660" 00:13:00.150 }, 00:13:00.150 "auth": { 00:13:00.150 "state": "completed", 00:13:00.150 "digest": "sha384", 00:13:00.150 "dhgroup": "ffdhe4096" 00:13:00.150 } 00:13:00.150 } 00:13:00.150 ]' 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.150 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:00.718 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:00.718 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.286 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.545 01:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.545 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.545 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:01.545 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.545 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:01.803 00:13:01.804 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.804 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.804 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.062 { 00:13:02.062 "cntlid": 79, 00:13:02.062 "qid": 0, 00:13:02.062 "state": "enabled", 00:13:02.062 "thread": "nvmf_tgt_poll_group_000", 00:13:02.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:02.062 "listen_address": { 00:13:02.062 "trtype": "TCP", 00:13:02.062 "adrfam": "IPv4", 00:13:02.062 "traddr": "10.0.0.3", 00:13:02.062 "trsvcid": "4420" 00:13:02.062 }, 00:13:02.062 "peer_address": { 00:13:02.062 "trtype": "TCP", 00:13:02.062 "adrfam": "IPv4", 00:13:02.062 "traddr": "10.0.0.1", 00:13:02.062 "trsvcid": "49676" 00:13:02.062 }, 00:13:02.062 "auth": { 00:13:02.062 "state": "completed", 00:13:02.062 "digest": "sha384", 00:13:02.062 "dhgroup": "ffdhe4096" 00:13:02.062 } 00:13:02.062 } 00:13:02.062 ]' 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:02.062 01:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:02.062 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.321 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.321 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.321 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.580 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:02.580 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:03.147 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.406 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:03.974 00:13:03.974 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.974 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.974 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.234 { 00:13:04.234 "cntlid": 81, 00:13:04.234 "qid": 0, 00:13:04.234 "state": "enabled", 00:13:04.234 "thread": "nvmf_tgt_poll_group_000", 00:13:04.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:04.234 "listen_address": { 00:13:04.234 "trtype": "TCP", 00:13:04.234 "adrfam": "IPv4", 00:13:04.234 "traddr": "10.0.0.3", 00:13:04.234 "trsvcid": "4420" 00:13:04.234 }, 00:13:04.234 "peer_address": { 00:13:04.234 "trtype": "TCP", 00:13:04.234 "adrfam": "IPv4", 00:13:04.234 "traddr": "10.0.0.1", 00:13:04.234 "trsvcid": "49706" 00:13:04.234 }, 00:13:04.234 "auth": { 00:13:04.234 "state": "completed", 00:13:04.234 "digest": "sha384", 00:13:04.234 "dhgroup": "ffdhe6144" 00:13:04.234 } 00:13:04.234 } 00:13:04.234 ]' 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
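The entries around this point all follow the same connect_authenticate cycle that target/auth.sh runs once per key and FFDHE group: pin the host's DH-HMAC-CHAP digest and DH group, register the host NQN on the subsystem with the key pair, attach a controller through the SPDK host, check the negotiated auth fields on the target qpair, then repeat the handshake with the kernel initiator before tearing down. The sketch below reconstructs one iteration (sha384 / ffdhe6144 / key0) using only commands that appear verbatim in these log entries; the wrapper variables, the placeholder DHHC-1 secret values, and the assumption that the target answers on SPDK's default RPC socket are editorial additions, not part of the test script itself.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration, assuming the SPDK target and
# the SPDK "host" application are already running as they are in this test run.
set -e
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                      # host-side RPC socket used throughout the log
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053
HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053
KEY0_SECRET='DHHC-1:00:...'                       # placeholder; the full secrets appear in the log
CKEY0_SECRET='DHHC-1:03:...'                      # placeholder; the full secrets appear in the log

# Host side: allow only the digest/dhgroup pair under test.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side (default RPC socket assumed): register the host with its key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller over TCP from the SPDK host, authenticating with the same keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up and the qpair negotiated the expected auth parameters.
"$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # expect: sha384
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # expect: ffdhe6144
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect: completed
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0

# Repeat the handshake with the kernel initiator using the raw DHHC-1 secrets, then tear down.
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY0_SECRET" --dhchap-ctrl-secret "$CKEY0_SECRET"
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"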
00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.234 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.493 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:04.493 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.060 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:05.318 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:05.318 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.318 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:05.318 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:13:05.318 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:05.318 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.319 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.319 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.319 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.577 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.577 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.577 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.577 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.840 00:13:05.840 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.840 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.840 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.105 { 00:13:06.105 "cntlid": 83, 00:13:06.105 "qid": 0, 00:13:06.105 "state": "enabled", 00:13:06.105 "thread": "nvmf_tgt_poll_group_000", 00:13:06.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:06.105 "listen_address": { 00:13:06.105 "trtype": "TCP", 00:13:06.105 "adrfam": "IPv4", 00:13:06.105 "traddr": "10.0.0.3", 00:13:06.105 "trsvcid": "4420" 00:13:06.105 }, 00:13:06.105 "peer_address": { 00:13:06.105 "trtype": "TCP", 00:13:06.105 "adrfam": "IPv4", 00:13:06.105 "traddr": "10.0.0.1", 00:13:06.105 "trsvcid": "49742" 00:13:06.105 }, 00:13:06.105 "auth": { 00:13:06.105 "state": "completed", 00:13:06.105 "digest": "sha384", 
00:13:06.105 "dhgroup": "ffdhe6144" 00:13:06.105 } 00:13:06.105 } 00:13:06.105 ]' 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.105 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.363 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:06.363 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.363 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.363 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.363 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.623 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:06.623 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.190 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.452 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:08.021 00:13:08.021 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.021 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.021 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.281 { 00:13:08.281 "cntlid": 85, 00:13:08.281 "qid": 0, 00:13:08.281 "state": "enabled", 00:13:08.281 "thread": "nvmf_tgt_poll_group_000", 00:13:08.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:08.281 "listen_address": { 00:13:08.281 "trtype": "TCP", 00:13:08.281 "adrfam": "IPv4", 00:13:08.281 "traddr": "10.0.0.3", 00:13:08.281 "trsvcid": "4420" 00:13:08.281 }, 00:13:08.281 "peer_address": { 00:13:08.281 "trtype": "TCP", 00:13:08.281 "adrfam": "IPv4", 00:13:08.281 "traddr": "10.0.0.1", 00:13:08.281 "trsvcid": "42188" 
00:13:08.281 }, 00:13:08.281 "auth": { 00:13:08.281 "state": "completed", 00:13:08.281 "digest": "sha384", 00:13:08.281 "dhgroup": "ffdhe6144" 00:13:08.281 } 00:13:08.281 } 00:13:08.281 ]' 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.281 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.848 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:08.848 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:09.415 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.415 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:09.415 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.416 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.416 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.416 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.416 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.416 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.675 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.934 00:13:09.934 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.934 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.934 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.193 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.193 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.193 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.193 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.452 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.452 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.452 { 00:13:10.452 "cntlid": 87, 00:13:10.452 "qid": 0, 00:13:10.453 "state": "enabled", 00:13:10.453 "thread": "nvmf_tgt_poll_group_000", 00:13:10.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:10.453 "listen_address": { 00:13:10.453 "trtype": "TCP", 00:13:10.453 "adrfam": "IPv4", 00:13:10.453 "traddr": "10.0.0.3", 00:13:10.453 "trsvcid": "4420" 00:13:10.453 }, 00:13:10.453 "peer_address": { 00:13:10.453 "trtype": "TCP", 00:13:10.453 "adrfam": "IPv4", 00:13:10.453 "traddr": "10.0.0.1", 00:13:10.453 "trsvcid": 
"42210" 00:13:10.453 }, 00:13:10.453 "auth": { 00:13:10.453 "state": "completed", 00:13:10.453 "digest": "sha384", 00:13:10.453 "dhgroup": "ffdhe6144" 00:13:10.453 } 00:13:10.453 } 00:13:10.453 ]' 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.453 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.711 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:10.711 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:11.646 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.646 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.584 00:13:12.584 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.584 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.584 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.842 { 00:13:12.842 "cntlid": 89, 00:13:12.842 "qid": 0, 00:13:12.842 "state": "enabled", 00:13:12.842 "thread": "nvmf_tgt_poll_group_000", 00:13:12.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:12.842 "listen_address": { 00:13:12.842 "trtype": "TCP", 00:13:12.842 "adrfam": "IPv4", 00:13:12.842 "traddr": "10.0.0.3", 00:13:12.842 "trsvcid": "4420" 00:13:12.842 }, 00:13:12.842 "peer_address": { 00:13:12.842 
"trtype": "TCP", 00:13:12.842 "adrfam": "IPv4", 00:13:12.842 "traddr": "10.0.0.1", 00:13:12.842 "trsvcid": "42246" 00:13:12.842 }, 00:13:12.842 "auth": { 00:13:12.842 "state": "completed", 00:13:12.842 "digest": "sha384", 00:13:12.842 "dhgroup": "ffdhe8192" 00:13:12.842 } 00:13:12.842 } 00:13:12.842 ]' 00:13:12.842 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.843 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.101 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:13.101 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:14.038 01:34:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.038 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.614 00:13:14.614 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.614 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.614 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.877 { 00:13:14.877 "cntlid": 91, 00:13:14.877 "qid": 0, 00:13:14.877 "state": "enabled", 00:13:14.877 "thread": "nvmf_tgt_poll_group_000", 00:13:14.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 
00:13:14.877 "listen_address": { 00:13:14.877 "trtype": "TCP", 00:13:14.877 "adrfam": "IPv4", 00:13:14.877 "traddr": "10.0.0.3", 00:13:14.877 "trsvcid": "4420" 00:13:14.877 }, 00:13:14.877 "peer_address": { 00:13:14.877 "trtype": "TCP", 00:13:14.877 "adrfam": "IPv4", 00:13:14.877 "traddr": "10.0.0.1", 00:13:14.877 "trsvcid": "42268" 00:13:14.877 }, 00:13:14.877 "auth": { 00:13:14.877 "state": "completed", 00:13:14.877 "digest": "sha384", 00:13:14.877 "dhgroup": "ffdhe8192" 00:13:14.877 } 00:13:14.877 } 00:13:14.877 ]' 00:13:14.877 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.136 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.394 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:15.395 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.331 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.643 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.643 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.643 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.643 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.210 00:13:17.210 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.210 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.210 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.469 { 00:13:17.469 "cntlid": 93, 00:13:17.469 "qid": 0, 00:13:17.469 "state": "enabled", 00:13:17.469 "thread": 
"nvmf_tgt_poll_group_000", 00:13:17.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:17.469 "listen_address": { 00:13:17.469 "trtype": "TCP", 00:13:17.469 "adrfam": "IPv4", 00:13:17.469 "traddr": "10.0.0.3", 00:13:17.469 "trsvcid": "4420" 00:13:17.469 }, 00:13:17.469 "peer_address": { 00:13:17.469 "trtype": "TCP", 00:13:17.469 "adrfam": "IPv4", 00:13:17.469 "traddr": "10.0.0.1", 00:13:17.469 "trsvcid": "53984" 00:13:17.469 }, 00:13:17.469 "auth": { 00:13:17.469 "state": "completed", 00:13:17.469 "digest": "sha384", 00:13:17.469 "dhgroup": "ffdhe8192" 00:13:17.469 } 00:13:17.469 } 00:13:17.469 ]' 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.469 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.469 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.469 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.469 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.469 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.469 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.037 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:18.037 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:18.605 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.606 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:18.606 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.606 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.606 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.606 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.606 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.606 01:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:18.865 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.432 00:13:19.432 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.432 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.432 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.691 { 00:13:19.691 "cntlid": 95, 00:13:19.691 "qid": 0, 00:13:19.691 "state": "enabled", 00:13:19.691 
"thread": "nvmf_tgt_poll_group_000", 00:13:19.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:19.691 "listen_address": { 00:13:19.691 "trtype": "TCP", 00:13:19.691 "adrfam": "IPv4", 00:13:19.691 "traddr": "10.0.0.3", 00:13:19.691 "trsvcid": "4420" 00:13:19.691 }, 00:13:19.691 "peer_address": { 00:13:19.691 "trtype": "TCP", 00:13:19.691 "adrfam": "IPv4", 00:13:19.691 "traddr": "10.0.0.1", 00:13:19.691 "trsvcid": "54010" 00:13:19.691 }, 00:13:19.691 "auth": { 00:13:19.691 "state": "completed", 00:13:19.691 "digest": "sha384", 00:13:19.691 "dhgroup": "ffdhe8192" 00:13:19.691 } 00:13:19.691 } 00:13:19.691 ]' 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.691 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.950 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.950 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.950 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.950 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.950 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.209 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:20.209 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.777 01:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:20.777 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.344 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.603 00:13:21.603 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:21.603 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:21.603 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:21.862 { 00:13:21.862 "cntlid": 97, 00:13:21.862 "qid": 0, 00:13:21.862 "state": "enabled", 00:13:21.862 "thread": "nvmf_tgt_poll_group_000", 00:13:21.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:21.862 "listen_address": { 00:13:21.862 "trtype": "TCP", 00:13:21.862 "adrfam": "IPv4", 00:13:21.862 "traddr": "10.0.0.3", 00:13:21.862 "trsvcid": "4420" 00:13:21.862 }, 00:13:21.862 "peer_address": { 00:13:21.862 "trtype": "TCP", 00:13:21.862 "adrfam": "IPv4", 00:13:21.862 "traddr": "10.0.0.1", 00:13:21.862 "trsvcid": "54052" 00:13:21.862 }, 00:13:21.862 "auth": { 00:13:21.862 "state": "completed", 00:13:21.862 "digest": "sha512", 00:13:21.862 "dhgroup": "null" 00:13:21.862 } 00:13:21.862 } 00:13:21.862 ]' 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:21.862 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.120 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.120 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.121 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.379 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:22.379 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:22.947 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.947 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:22.947 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.947 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.206 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:23.206 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.206 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:23.206 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.465 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.724 00:13:23.724 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:23.724 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:23.724 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.983 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.983 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.983 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.983 01:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.983 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.983 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:23.984 { 00:13:23.984 "cntlid": 99, 00:13:23.984 "qid": 0, 00:13:23.984 "state": "enabled", 00:13:23.984 "thread": "nvmf_tgt_poll_group_000", 00:13:23.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:23.984 "listen_address": { 00:13:23.984 "trtype": "TCP", 00:13:23.984 "adrfam": "IPv4", 00:13:23.984 "traddr": "10.0.0.3", 00:13:23.984 "trsvcid": "4420" 00:13:23.984 }, 00:13:23.984 "peer_address": { 00:13:23.984 "trtype": "TCP", 00:13:23.984 "adrfam": "IPv4", 00:13:23.984 "traddr": "10.0.0.1", 00:13:23.984 "trsvcid": "54072" 00:13:23.984 }, 00:13:23.984 "auth": { 00:13:23.984 "state": "completed", 00:13:23.984 "digest": "sha512", 00:13:23.984 "dhgroup": "null" 00:13:23.984 } 00:13:23.984 } 00:13:23.984 ]' 00:13:23.984 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:23.984 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.984 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.243 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:24.243 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.243 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.243 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.243 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.501 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:24.502 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:25.068 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.069 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:25.069 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.069 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.069 01:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.069 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.069 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:25.069 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.636 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.636 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.636 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.636 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.636 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.895 00:13:25.895 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.895 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.895 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.154 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.154 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.155 { 00:13:26.155 "cntlid": 101, 00:13:26.155 "qid": 0, 00:13:26.155 "state": "enabled", 00:13:26.155 "thread": "nvmf_tgt_poll_group_000", 00:13:26.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:26.155 "listen_address": { 00:13:26.155 "trtype": "TCP", 00:13:26.155 "adrfam": "IPv4", 00:13:26.155 "traddr": "10.0.0.3", 00:13:26.155 "trsvcid": "4420" 00:13:26.155 }, 00:13:26.155 "peer_address": { 00:13:26.155 "trtype": "TCP", 00:13:26.155 "adrfam": "IPv4", 00:13:26.155 "traddr": "10.0.0.1", 00:13:26.155 "trsvcid": "54118" 00:13:26.155 }, 00:13:26.155 "auth": { 00:13:26.155 "state": "completed", 00:13:26.155 "digest": "sha512", 00:13:26.155 "dhgroup": "null" 00:13:26.155 } 00:13:26.155 } 00:13:26.155 ]' 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.155 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.414 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:26.414 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:27.350 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.609 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.868 00:13:27.868 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.868 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.868 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.127 { 00:13:28.127 "cntlid": 103, 00:13:28.127 "qid": 0, 00:13:28.127 "state": "enabled", 00:13:28.127 "thread": "nvmf_tgt_poll_group_000", 00:13:28.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:28.127 "listen_address": { 00:13:28.127 "trtype": "TCP", 00:13:28.127 "adrfam": "IPv4", 00:13:28.127 "traddr": "10.0.0.3", 00:13:28.127 "trsvcid": "4420" 00:13:28.127 }, 00:13:28.127 "peer_address": { 00:13:28.127 "trtype": "TCP", 00:13:28.127 "adrfam": "IPv4", 00:13:28.127 "traddr": "10.0.0.1", 00:13:28.127 "trsvcid": "55462" 00:13:28.127 }, 00:13:28.127 "auth": { 00:13:28.127 "state": "completed", 00:13:28.127 "digest": "sha512", 00:13:28.127 "dhgroup": "null" 00:13:28.127 } 00:13:28.127 } 00:13:28.127 ]' 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.127 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.385 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:28.385 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:28.952 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.520 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.779 00:13:29.779 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.779 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.779 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.037 
01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.037 { 00:13:30.037 "cntlid": 105, 00:13:30.037 "qid": 0, 00:13:30.037 "state": "enabled", 00:13:30.037 "thread": "nvmf_tgt_poll_group_000", 00:13:30.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:30.037 "listen_address": { 00:13:30.037 "trtype": "TCP", 00:13:30.037 "adrfam": "IPv4", 00:13:30.037 "traddr": "10.0.0.3", 00:13:30.037 "trsvcid": "4420" 00:13:30.037 }, 00:13:30.037 "peer_address": { 00:13:30.037 "trtype": "TCP", 00:13:30.037 "adrfam": "IPv4", 00:13:30.037 "traddr": "10.0.0.1", 00:13:30.037 "trsvcid": "55488" 00:13:30.037 }, 00:13:30.037 "auth": { 00:13:30.037 "state": "completed", 00:13:30.037 "digest": "sha512", 00:13:30.037 "dhgroup": "ffdhe2048" 00:13:30.037 } 00:13:30.037 } 00:13:30.037 ]' 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.037 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.038 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.038 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.038 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.297 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:30.297 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:31.233 01:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.233 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.493 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.751 00:13:31.751 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.751 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.751 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.010 { 00:13:32.010 "cntlid": 107, 00:13:32.010 "qid": 0, 00:13:32.010 "state": "enabled", 00:13:32.010 "thread": "nvmf_tgt_poll_group_000", 00:13:32.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:32.010 "listen_address": { 00:13:32.010 "trtype": "TCP", 00:13:32.010 "adrfam": "IPv4", 00:13:32.010 "traddr": "10.0.0.3", 00:13:32.010 "trsvcid": "4420" 00:13:32.010 }, 00:13:32.010 "peer_address": { 00:13:32.010 "trtype": "TCP", 00:13:32.010 "adrfam": "IPv4", 00:13:32.010 "traddr": "10.0.0.1", 00:13:32.010 "trsvcid": "55526" 00:13:32.010 }, 00:13:32.010 "auth": { 00:13:32.010 "state": "completed", 00:13:32.010 "digest": "sha512", 00:13:32.010 "dhgroup": "ffdhe2048" 00:13:32.010 } 00:13:32.010 } 00:13:32.010 ]' 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:32.010 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.269 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.269 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.269 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.528 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:32.528 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.096 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.355 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.614 00:13:33.614 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.614 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.614 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.182 { 00:13:34.182 "cntlid": 109, 00:13:34.182 "qid": 0, 00:13:34.182 "state": "enabled", 00:13:34.182 "thread": "nvmf_tgt_poll_group_000", 00:13:34.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:34.182 "listen_address": { 00:13:34.182 "trtype": "TCP", 00:13:34.182 "adrfam": "IPv4", 00:13:34.182 "traddr": "10.0.0.3", 00:13:34.182 "trsvcid": "4420" 00:13:34.182 }, 00:13:34.182 "peer_address": { 00:13:34.182 "trtype": "TCP", 00:13:34.182 "adrfam": "IPv4", 00:13:34.182 "traddr": "10.0.0.1", 00:13:34.182 "trsvcid": "55552" 00:13:34.182 }, 00:13:34.182 "auth": { 00:13:34.182 "state": "completed", 00:13:34.182 "digest": "sha512", 00:13:34.182 "dhgroup": "ffdhe2048" 00:13:34.182 } 00:13:34.182 } 00:13:34.182 ]' 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.182 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.441 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:34.441 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.008 01:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:35.008 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.267 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.525 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.784 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.784 { 00:13:35.784 "cntlid": 111, 00:13:35.784 "qid": 0, 00:13:35.784 "state": "enabled", 00:13:35.784 "thread": "nvmf_tgt_poll_group_000", 00:13:35.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:35.784 "listen_address": { 00:13:35.784 "trtype": "TCP", 00:13:35.784 "adrfam": "IPv4", 00:13:35.784 "traddr": "10.0.0.3", 00:13:35.784 "trsvcid": "4420" 00:13:35.784 }, 00:13:35.784 "peer_address": { 00:13:35.784 "trtype": "TCP", 00:13:35.784 "adrfam": "IPv4", 00:13:35.784 "traddr": "10.0.0.1", 00:13:35.784 "trsvcid": "55586" 00:13:35.784 }, 00:13:35.784 "auth": { 00:13:35.785 "state": "completed", 00:13:35.785 "digest": "sha512", 00:13:35.785 "dhgroup": "ffdhe2048" 00:13:35.785 } 00:13:35.785 } 00:13:35.785 ]' 00:13:35.785 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.044 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.303 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:36.303 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:36.871 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.130 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.389 00:13:37.389 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.389 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.389 01:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.648 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.648 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.648 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.648 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.907 { 00:13:37.907 "cntlid": 113, 00:13:37.907 "qid": 0, 00:13:37.907 "state": "enabled", 00:13:37.907 "thread": "nvmf_tgt_poll_group_000", 00:13:37.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:37.907 "listen_address": { 00:13:37.907 "trtype": "TCP", 00:13:37.907 "adrfam": "IPv4", 00:13:37.907 "traddr": "10.0.0.3", 00:13:37.907 "trsvcid": "4420" 00:13:37.907 }, 00:13:37.907 "peer_address": { 00:13:37.907 "trtype": "TCP", 00:13:37.907 "adrfam": "IPv4", 00:13:37.907 "traddr": "10.0.0.1", 00:13:37.907 "trsvcid": "35706" 00:13:37.907 }, 00:13:37.907 "auth": { 00:13:37.907 "state": "completed", 00:13:37.907 "digest": "sha512", 00:13:37.907 "dhgroup": "ffdhe3072" 00:13:37.907 } 00:13:37.907 } 00:13:37.907 ]' 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.907 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.908 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.178 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:38.178 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 
00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.139 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.707 00:13:39.707 01:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.707 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.707 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.707 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.707 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.707 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.707 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.966 { 00:13:39.966 "cntlid": 115, 00:13:39.966 "qid": 0, 00:13:39.966 "state": "enabled", 00:13:39.966 "thread": "nvmf_tgt_poll_group_000", 00:13:39.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:39.966 "listen_address": { 00:13:39.966 "trtype": "TCP", 00:13:39.966 "adrfam": "IPv4", 00:13:39.966 "traddr": "10.0.0.3", 00:13:39.966 "trsvcid": "4420" 00:13:39.966 }, 00:13:39.966 "peer_address": { 00:13:39.966 "trtype": "TCP", 00:13:39.966 "adrfam": "IPv4", 00:13:39.966 "traddr": "10.0.0.1", 00:13:39.966 "trsvcid": "35730" 00:13:39.966 }, 00:13:39.966 "auth": { 00:13:39.966 "state": "completed", 00:13:39.966 "digest": "sha512", 00:13:39.966 "dhgroup": "ffdhe3072" 00:13:39.966 } 00:13:39.966 } 00:13:39.966 ]' 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.966 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.225 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:40.225 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret 
DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:40.793 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.052 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.620 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.620 { 00:13:41.620 "cntlid": 117, 00:13:41.620 "qid": 0, 00:13:41.620 "state": "enabled", 00:13:41.620 "thread": "nvmf_tgt_poll_group_000", 00:13:41.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:41.620 "listen_address": { 00:13:41.620 "trtype": "TCP", 00:13:41.620 "adrfam": "IPv4", 00:13:41.620 "traddr": "10.0.0.3", 00:13:41.620 "trsvcid": "4420" 00:13:41.620 }, 00:13:41.620 "peer_address": { 00:13:41.620 "trtype": "TCP", 00:13:41.620 "adrfam": "IPv4", 00:13:41.620 "traddr": "10.0.0.1", 00:13:41.620 "trsvcid": "35740" 00:13:41.620 }, 00:13:41.620 "auth": { 00:13:41.620 "state": "completed", 00:13:41.620 "digest": "sha512", 00:13:41.620 "dhgroup": "ffdhe3072" 00:13:41.620 } 00:13:41.620 } 00:13:41.620 ]' 00:13:41.620 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.879 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.138 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:42.138 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:42.706 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.706 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:42.706 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.706 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.706 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.707 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.707 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.707 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.965 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.532 00:13:43.532 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.532 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.532 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.533 { 00:13:43.533 "cntlid": 119, 00:13:43.533 "qid": 0, 00:13:43.533 "state": "enabled", 00:13:43.533 "thread": "nvmf_tgt_poll_group_000", 00:13:43.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:43.533 "listen_address": { 00:13:43.533 "trtype": "TCP", 00:13:43.533 "adrfam": "IPv4", 00:13:43.533 "traddr": "10.0.0.3", 00:13:43.533 "trsvcid": "4420" 00:13:43.533 }, 00:13:43.533 "peer_address": { 00:13:43.533 "trtype": "TCP", 00:13:43.533 "adrfam": "IPv4", 00:13:43.533 "traddr": "10.0.0.1", 00:13:43.533 "trsvcid": "35768" 00:13:43.533 }, 00:13:43.533 "auth": { 00:13:43.533 "state": "completed", 00:13:43.533 "digest": "sha512", 00:13:43.533 "dhgroup": "ffdhe3072" 00:13:43.533 } 00:13:43.533 } 00:13:43.533 ]' 00:13:43.533 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.799 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.059 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:44.059 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.997 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.255 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.255 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.255 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.255 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.514 00:13:45.514 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.514 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.514 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.079 { 00:13:46.079 "cntlid": 121, 00:13:46.079 "qid": 0, 00:13:46.079 "state": "enabled", 00:13:46.079 "thread": "nvmf_tgt_poll_group_000", 00:13:46.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:46.079 "listen_address": { 00:13:46.079 "trtype": "TCP", 00:13:46.079 "adrfam": "IPv4", 00:13:46.079 "traddr": "10.0.0.3", 00:13:46.079 "trsvcid": "4420" 00:13:46.079 }, 00:13:46.079 "peer_address": { 00:13:46.079 "trtype": "TCP", 00:13:46.079 "adrfam": "IPv4", 00:13:46.079 "traddr": "10.0.0.1", 00:13:46.079 "trsvcid": "35796" 00:13:46.079 }, 00:13:46.079 "auth": { 00:13:46.079 "state": "completed", 00:13:46.079 "digest": "sha512", 00:13:46.079 "dhgroup": "ffdhe4096" 00:13:46.079 } 00:13:46.079 } 00:13:46.079 ]' 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.079 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.336 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret 
DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:46.336 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:46.902 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.160 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.419 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.688 00:13:47.688 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.688 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.688 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.975 { 00:13:47.975 "cntlid": 123, 00:13:47.975 "qid": 0, 00:13:47.975 "state": "enabled", 00:13:47.975 "thread": "nvmf_tgt_poll_group_000", 00:13:47.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:47.975 "listen_address": { 00:13:47.975 "trtype": "TCP", 00:13:47.975 "adrfam": "IPv4", 00:13:47.975 "traddr": "10.0.0.3", 00:13:47.975 "trsvcid": "4420" 00:13:47.975 }, 00:13:47.975 "peer_address": { 00:13:47.975 "trtype": "TCP", 00:13:47.975 "adrfam": "IPv4", 00:13:47.975 "traddr": "10.0.0.1", 00:13:47.975 "trsvcid": "53196" 00:13:47.975 }, 00:13:47.975 "auth": { 00:13:47.975 "state": "completed", 00:13:47.975 "digest": "sha512", 00:13:47.975 "dhgroup": "ffdhe4096" 00:13:47.975 } 00:13:47.975 } 00:13:47.975 ]' 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:47.975 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.233 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.233 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.233 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.233 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.233 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.492 01:35:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:48.492 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:49.059 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.059 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:49.059 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.059 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.059 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.318 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.318 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.318 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.577 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:49.577 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.577 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.577 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.577 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.578 01:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.578 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:49.836 00:13:49.836 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.836 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.836 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.095 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.095 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.095 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.095 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.355 { 00:13:50.355 "cntlid": 125, 00:13:50.355 "qid": 0, 00:13:50.355 "state": "enabled", 00:13:50.355 "thread": "nvmf_tgt_poll_group_000", 00:13:50.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:50.355 "listen_address": { 00:13:50.355 "trtype": "TCP", 00:13:50.355 "adrfam": "IPv4", 00:13:50.355 "traddr": "10.0.0.3", 00:13:50.355 "trsvcid": "4420" 00:13:50.355 }, 00:13:50.355 "peer_address": { 00:13:50.355 "trtype": "TCP", 00:13:50.355 "adrfam": "IPv4", 00:13:50.355 "traddr": "10.0.0.1", 00:13:50.355 "trsvcid": "53218" 00:13:50.355 }, 00:13:50.355 "auth": { 00:13:50.355 "state": "completed", 00:13:50.355 "digest": "sha512", 00:13:50.355 "dhgroup": "ffdhe4096" 00:13:50.355 } 00:13:50.355 } 00:13:50.355 ]' 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.355 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.614 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:50.614 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.552 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.812 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.071 00:13:52.071 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.071 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.071 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.330 { 00:13:52.330 "cntlid": 127, 00:13:52.330 "qid": 0, 00:13:52.330 "state": "enabled", 00:13:52.330 "thread": "nvmf_tgt_poll_group_000", 00:13:52.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:52.330 "listen_address": { 00:13:52.330 "trtype": "TCP", 00:13:52.330 "adrfam": "IPv4", 00:13:52.330 "traddr": "10.0.0.3", 00:13:52.330 "trsvcid": "4420" 00:13:52.330 }, 00:13:52.330 "peer_address": { 00:13:52.330 "trtype": "TCP", 00:13:52.330 "adrfam": "IPv4", 00:13:52.330 "traddr": "10.0.0.1", 00:13:52.330 "trsvcid": "53242" 00:13:52.330 }, 00:13:52.330 "auth": { 00:13:52.330 "state": "completed", 00:13:52.330 "digest": "sha512", 00:13:52.330 "dhgroup": "ffdhe4096" 00:13:52.330 } 00:13:52.330 } 00:13:52.330 ]' 00:13:52.330 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.589 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.849 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:52.849 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:53.787 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.046 01:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.046 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.306 00:13:54.306 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.306 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.306 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.874 { 00:13:54.874 "cntlid": 129, 00:13:54.874 "qid": 0, 00:13:54.874 "state": "enabled", 00:13:54.874 "thread": "nvmf_tgt_poll_group_000", 00:13:54.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:54.874 "listen_address": { 00:13:54.874 "trtype": "TCP", 00:13:54.874 "adrfam": "IPv4", 00:13:54.874 "traddr": "10.0.0.3", 00:13:54.874 "trsvcid": "4420" 00:13:54.874 }, 00:13:54.874 "peer_address": { 00:13:54.874 "trtype": "TCP", 00:13:54.874 "adrfam": "IPv4", 00:13:54.874 "traddr": "10.0.0.1", 00:13:54.874 "trsvcid": "53260" 00:13:54.874 }, 00:13:54.874 "auth": { 00:13:54.874 "state": "completed", 00:13:54.874 "digest": "sha512", 00:13:54.874 "dhgroup": "ffdhe6144" 00:13:54.874 } 00:13:54.874 } 00:13:54.874 ]' 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.874 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.875 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.875 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.875 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.134 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:55.134 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.072 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.331 01:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.331 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:56.590 00:13:56.849 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.849 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.849 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.109 { 00:13:57.109 "cntlid": 131, 00:13:57.109 "qid": 0, 00:13:57.109 "state": "enabled", 00:13:57.109 "thread": "nvmf_tgt_poll_group_000", 00:13:57.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:57.109 "listen_address": { 00:13:57.109 "trtype": "TCP", 00:13:57.109 "adrfam": "IPv4", 00:13:57.109 "traddr": "10.0.0.3", 00:13:57.109 "trsvcid": "4420" 00:13:57.109 }, 00:13:57.109 "peer_address": { 00:13:57.109 "trtype": "TCP", 00:13:57.109 "adrfam": "IPv4", 00:13:57.109 "traddr": "10.0.0.1", 00:13:57.109 "trsvcid": "55116" 00:13:57.109 }, 00:13:57.109 "auth": { 00:13:57.109 "state": "completed", 00:13:57.109 "digest": "sha512", 00:13:57.109 "dhgroup": "ffdhe6144" 00:13:57.109 } 00:13:57.109 } 00:13:57.109 ]' 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.109 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.414 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:57.414 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:13:57.982 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.983 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:13:57.983 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.983 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.242 01:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.242 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:58.811 00:13:58.811 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.811 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.811 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.070 { 00:13:59.070 "cntlid": 133, 00:13:59.070 "qid": 0, 00:13:59.070 "state": "enabled", 00:13:59.070 "thread": "nvmf_tgt_poll_group_000", 00:13:59.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:13:59.070 "listen_address": { 00:13:59.070 "trtype": "TCP", 00:13:59.070 "adrfam": "IPv4", 00:13:59.070 "traddr": "10.0.0.3", 00:13:59.070 "trsvcid": "4420" 00:13:59.070 }, 00:13:59.070 "peer_address": { 00:13:59.070 "trtype": "TCP", 00:13:59.070 "adrfam": "IPv4", 00:13:59.070 "traddr": "10.0.0.1", 00:13:59.070 "trsvcid": "55136" 00:13:59.070 }, 00:13:59.070 "auth": { 00:13:59.070 "state": "completed", 00:13:59.070 "digest": "sha512", 00:13:59.070 "dhgroup": "ffdhe6144" 00:13:59.070 } 00:13:59.070 } 00:13:59.070 ]' 00:13:59.070 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.329 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.329 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.329 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:59.329 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.329 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.329 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.330 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.589 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:13:59.589 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:14:00.156 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.416 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:00.675 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.243 00:14:01.243 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.243 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.243 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.502 { 00:14:01.502 "cntlid": 135, 00:14:01.502 "qid": 0, 00:14:01.502 "state": "enabled", 00:14:01.502 "thread": "nvmf_tgt_poll_group_000", 00:14:01.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:01.502 "listen_address": { 00:14:01.502 "trtype": "TCP", 00:14:01.502 "adrfam": "IPv4", 00:14:01.502 "traddr": "10.0.0.3", 00:14:01.502 "trsvcid": "4420" 00:14:01.502 }, 00:14:01.502 "peer_address": { 00:14:01.502 "trtype": "TCP", 00:14:01.502 "adrfam": "IPv4", 00:14:01.502 "traddr": "10.0.0.1", 00:14:01.502 "trsvcid": "55174" 00:14:01.502 }, 00:14:01.502 "auth": { 00:14:01.502 "state": "completed", 00:14:01.502 "digest": "sha512", 00:14:01.502 "dhgroup": "ffdhe6144" 00:14:01.502 } 00:14:01.502 } 00:14:01.502 ]' 00:14:01.502 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.502 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.761 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:01.761 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:02.329 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.329 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:02.329 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.329 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.587 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.587 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:02.587 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.588 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.588 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:02.846 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.411 00:14:03.412 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.412 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.412 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.670 { 00:14:03.670 "cntlid": 137, 00:14:03.670 "qid": 0, 00:14:03.670 "state": "enabled", 00:14:03.670 "thread": "nvmf_tgt_poll_group_000", 00:14:03.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:03.670 "listen_address": { 00:14:03.670 "trtype": "TCP", 00:14:03.670 "adrfam": "IPv4", 00:14:03.670 "traddr": "10.0.0.3", 00:14:03.670 "trsvcid": "4420" 00:14:03.670 }, 00:14:03.670 "peer_address": { 00:14:03.670 "trtype": "TCP", 00:14:03.670 "adrfam": "IPv4", 00:14:03.670 "traddr": "10.0.0.1", 00:14:03.670 "trsvcid": "55206" 00:14:03.670 }, 00:14:03.670 "auth": { 00:14:03.670 "state": "completed", 00:14:03.670 "digest": "sha512", 00:14:03.670 "dhgroup": "ffdhe8192" 00:14:03.670 } 00:14:03.670 } 00:14:03.670 ]' 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.670 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.670 01:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.929 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.929 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.929 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.929 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.929 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.188 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:14:04.188 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:04.755 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.013 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:05.013 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.013 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.013 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:05.013 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.013 01:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.014 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.581 00:14:05.581 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.581 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.581 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.149 { 00:14:06.149 "cntlid": 139, 00:14:06.149 "qid": 0, 00:14:06.149 "state": "enabled", 00:14:06.149 "thread": "nvmf_tgt_poll_group_000", 00:14:06.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:06.149 "listen_address": { 00:14:06.149 "trtype": "TCP", 00:14:06.149 "adrfam": "IPv4", 00:14:06.149 "traddr": "10.0.0.3", 00:14:06.149 "trsvcid": "4420" 00:14:06.149 }, 00:14:06.149 "peer_address": { 00:14:06.149 "trtype": "TCP", 00:14:06.149 "adrfam": "IPv4", 00:14:06.149 "traddr": "10.0.0.1", 00:14:06.149 "trsvcid": "55230" 00:14:06.149 }, 00:14:06.149 "auth": { 00:14:06.149 "state": "completed", 00:14:06.149 "digest": "sha512", 00:14:06.149 "dhgroup": "ffdhe8192" 00:14:06.149 } 00:14:06.149 } 00:14:06.149 ]' 00:14:06.149 01:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.149 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.408 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:14:06.408 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: --dhchap-ctrl-secret DHHC-1:02:NzNlZjQxNWRlNzE4NzU0YjhhYWI1ZTIzNzIwNDE5YWM4OWMxMGJjNDcyYzk2Zjg5O23Lmg==: 00:14:06.993 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.993 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.993 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:06.993 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.993 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.251 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.251 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.251 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.251 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.509 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.510 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.510 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.510 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.510 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.510 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.510 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.077 00:14:08.077 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.077 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.077 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.335 { 00:14:08.335 "cntlid": 141, 00:14:08.335 "qid": 0, 00:14:08.335 "state": "enabled", 00:14:08.335 "thread": "nvmf_tgt_poll_group_000", 00:14:08.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:08.335 "listen_address": { 00:14:08.335 "trtype": "TCP", 00:14:08.335 "adrfam": "IPv4", 00:14:08.335 "traddr": "10.0.0.3", 00:14:08.335 "trsvcid": "4420" 00:14:08.335 }, 00:14:08.335 "peer_address": { 00:14:08.335 "trtype": "TCP", 00:14:08.335 "adrfam": "IPv4", 00:14:08.335 "traddr": "10.0.0.1", 00:14:08.335 "trsvcid": "39710" 00:14:08.335 }, 00:14:08.335 "auth": { 00:14:08.335 "state": "completed", 00:14:08.335 "digest": 
"sha512", 00:14:08.335 "dhgroup": "ffdhe8192" 00:14:08.335 } 00:14:08.335 } 00:14:08.335 ]' 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.335 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.594 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:08.594 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.594 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.594 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.594 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.852 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:14:08.852 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:01:NDk0YzU0MzhlM2ZlZDRkNGIyMjgyNGQ0ZWY0MjY2OTSGTj9o: 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:09.787 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.046 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.613 00:14:10.613 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.613 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.613 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.181 { 00:14:11.181 "cntlid": 143, 00:14:11.181 "qid": 0, 00:14:11.181 "state": "enabled", 00:14:11.181 "thread": "nvmf_tgt_poll_group_000", 00:14:11.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:11.181 "listen_address": { 00:14:11.181 "trtype": "TCP", 00:14:11.181 "adrfam": "IPv4", 00:14:11.181 "traddr": "10.0.0.3", 00:14:11.181 "trsvcid": "4420" 00:14:11.181 }, 00:14:11.181 "peer_address": { 00:14:11.181 "trtype": "TCP", 00:14:11.181 "adrfam": "IPv4", 00:14:11.181 "traddr": "10.0.0.1", 00:14:11.181 "trsvcid": "39734" 00:14:11.181 }, 00:14:11.181 "auth": { 00:14:11.181 "state": "completed", 00:14:11.181 
"digest": "sha512", 00:14:11.181 "dhgroup": "ffdhe8192" 00:14:11.181 } 00:14:11.181 } 00:14:11.181 ]' 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.181 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.440 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:11.440 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.376 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.634 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.570 00:14:13.570 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.570 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.570 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.828 { 00:14:13.828 "cntlid": 145, 00:14:13.828 "qid": 0, 00:14:13.828 "state": "enabled", 00:14:13.828 "thread": "nvmf_tgt_poll_group_000", 00:14:13.828 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:13.828 "listen_address": { 00:14:13.828 "trtype": "TCP", 00:14:13.828 "adrfam": "IPv4", 00:14:13.828 "traddr": "10.0.0.3", 00:14:13.828 "trsvcid": "4420" 00:14:13.828 }, 00:14:13.828 "peer_address": { 00:14:13.828 "trtype": "TCP", 00:14:13.828 "adrfam": "IPv4", 00:14:13.828 "traddr": "10.0.0.1", 00:14:13.828 "trsvcid": "39764" 00:14:13.828 }, 00:14:13.828 "auth": { 00:14:13.828 "state": "completed", 00:14:13.828 "digest": "sha512", 00:14:13.828 "dhgroup": "ffdhe8192" 00:14:13.828 } 00:14:13.828 } 00:14:13.828 ]' 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.828 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.395 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:14:14.395 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:00:MGNiZTBiMGVhZjU2ZDUwMDUyOWU0NzU4YWU5MmQ2NmQxMTQyMzgyN2I3YzE3NTYzUzIs6g==: --dhchap-ctrl-secret DHHC-1:03:NGI3OTA0MTI1ZDYzMGU0YjVjY2U1YjUzZmYzY2ViOTZhN2RhYjkzZjFiM2E5MzNlM2Y1ZDgwYmY3MGE3MDFlMh8kde0=: 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 00:14:14.962 01:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.962 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:14.963 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:14.963 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:15.530 request: 00:14:15.530 { 00:14:15.530 "name": "nvme0", 00:14:15.530 "trtype": "tcp", 00:14:15.530 "traddr": "10.0.0.3", 00:14:15.530 "adrfam": "ipv4", 00:14:15.530 "trsvcid": "4420", 00:14:15.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:15.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:15.530 "prchk_reftag": false, 00:14:15.530 "prchk_guard": false, 00:14:15.530 "hdgst": false, 00:14:15.530 "ddgst": false, 00:14:15.530 "dhchap_key": "key2", 00:14:15.530 "allow_unrecognized_csi": false, 00:14:15.530 "method": "bdev_nvme_attach_controller", 00:14:15.530 "req_id": 1 00:14:15.530 } 00:14:15.530 Got JSON-RPC error response 00:14:15.530 response: 00:14:15.530 { 00:14:15.530 "code": -5, 00:14:15.530 "message": "Input/output error" 00:14:15.530 } 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:15.530 
01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:15.530 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:16.466 request: 00:14:16.466 { 00:14:16.466 "name": "nvme0", 00:14:16.466 "trtype": "tcp", 00:14:16.466 "traddr": "10.0.0.3", 00:14:16.466 "adrfam": "ipv4", 00:14:16.466 "trsvcid": "4420", 00:14:16.466 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:16.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:16.466 "prchk_reftag": false, 00:14:16.466 "prchk_guard": false, 00:14:16.466 "hdgst": false, 00:14:16.466 "ddgst": false, 00:14:16.466 "dhchap_key": "key1", 00:14:16.466 "dhchap_ctrlr_key": "ckey2", 00:14:16.466 "allow_unrecognized_csi": false, 00:14:16.466 "method": "bdev_nvme_attach_controller", 00:14:16.466 "req_id": 1 00:14:16.466 } 00:14:16.466 Got JSON-RPC error response 00:14:16.466 response: 00:14:16.466 { 
00:14:16.466 "code": -5, 00:14:16.466 "message": "Input/output error" 00:14:16.466 } 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.466 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.039 
request: 00:14:17.039 { 00:14:17.039 "name": "nvme0", 00:14:17.039 "trtype": "tcp", 00:14:17.039 "traddr": "10.0.0.3", 00:14:17.039 "adrfam": "ipv4", 00:14:17.039 "trsvcid": "4420", 00:14:17.039 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:17.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:17.039 "prchk_reftag": false, 00:14:17.039 "prchk_guard": false, 00:14:17.039 "hdgst": false, 00:14:17.039 "ddgst": false, 00:14:17.039 "dhchap_key": "key1", 00:14:17.039 "dhchap_ctrlr_key": "ckey1", 00:14:17.039 "allow_unrecognized_csi": false, 00:14:17.039 "method": "bdev_nvme_attach_controller", 00:14:17.039 "req_id": 1 00:14:17.039 } 00:14:17.039 Got JSON-RPC error response 00:14:17.039 response: 00:14:17.039 { 00:14:17.039 "code": -5, 00:14:17.039 "message": "Input/output error" 00:14:17.039 } 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 81871 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81871 ']' 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81871 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81871 00:14:17.039 killing process with pid 81871 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81871' 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81871 00:14:17.039 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81871 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:17.317 01:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=84924 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 84924 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 84924 ']' 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.317 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 84924 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 84924 ']' 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.582 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.841 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.841 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:17.841 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:17.841 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.841 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 null0 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cbW 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.6ae ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6ae 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VaE 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.vJR ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vJR 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.100 01:35:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5sH 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Rdf ]] 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Rdf 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.C0o 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:14:18.101 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:19.037 nvme0n1 00:14:19.037 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.037 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.037 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.604 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.604 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.605 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.605 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.605 { 00:14:19.605 "cntlid": 1, 00:14:19.605 "qid": 0, 00:14:19.605 "state": "enabled", 00:14:19.605 "thread": "nvmf_tgt_poll_group_000", 00:14:19.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:19.605 "listen_address": { 00:14:19.605 "trtype": "TCP", 00:14:19.605 "adrfam": "IPv4", 00:14:19.605 "traddr": "10.0.0.3", 00:14:19.605 "trsvcid": "4420" 00:14:19.605 }, 00:14:19.605 "peer_address": { 00:14:19.605 "trtype": "TCP", 00:14:19.605 "adrfam": "IPv4", 00:14:19.605 "traddr": "10.0.0.1", 00:14:19.605 "trsvcid": "44596" 00:14:19.605 }, 00:14:19.605 "auth": { 00:14:19.605 "state": "completed", 00:14:19.605 "digest": "sha512", 00:14:19.605 "dhgroup": "ffdhe8192" 00:14:19.605 } 00:14:19.605 } 00:14:19.605 ]' 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.605 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.864 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:19.864 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key3 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:20.801 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.059 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.317 request: 00:14:21.317 { 00:14:21.317 "name": "nvme0", 00:14:21.317 "trtype": "tcp", 00:14:21.317 "traddr": "10.0.0.3", 00:14:21.317 "adrfam": "ipv4", 00:14:21.317 "trsvcid": "4420", 00:14:21.317 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:21.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:21.317 "prchk_reftag": false, 00:14:21.317 "prchk_guard": false, 00:14:21.317 "hdgst": false, 00:14:21.317 "ddgst": false, 00:14:21.317 "dhchap_key": "key3", 00:14:21.317 "allow_unrecognized_csi": false, 00:14:21.317 "method": "bdev_nvme_attach_controller", 00:14:21.317 "req_id": 1 00:14:21.317 } 00:14:21.317 Got JSON-RPC error response 00:14:21.317 response: 00:14:21.317 { 00:14:21.317 "code": -5, 00:14:21.317 "message": "Input/output error" 00:14:21.317 } 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:21.317 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.884 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:22.143 request: 00:14:22.143 { 00:14:22.143 "name": "nvme0", 00:14:22.143 "trtype": "tcp", 00:14:22.143 "traddr": "10.0.0.3", 00:14:22.143 "adrfam": "ipv4", 00:14:22.143 "trsvcid": "4420", 00:14:22.143 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:22.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:22.143 "prchk_reftag": false, 00:14:22.143 "prchk_guard": false, 00:14:22.143 "hdgst": false, 00:14:22.143 "ddgst": false, 00:14:22.143 "dhchap_key": "key3", 00:14:22.143 "allow_unrecognized_csi": false, 00:14:22.143 "method": "bdev_nvme_attach_controller", 00:14:22.143 "req_id": 1 00:14:22.143 } 00:14:22.143 Got JSON-RPC error response 00:14:22.143 response: 00:14:22.143 { 00:14:22.143 "code": -5, 00:14:22.143 "message": "Input/output error" 00:14:22.143 } 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.143 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.402 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:22.971 request: 00:14:22.971 { 00:14:22.971 "name": "nvme0", 00:14:22.971 "trtype": "tcp", 00:14:22.971 "traddr": "10.0.0.3", 00:14:22.971 "adrfam": "ipv4", 00:14:22.971 "trsvcid": "4420", 00:14:22.971 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:22.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:22.971 "prchk_reftag": false, 00:14:22.971 "prchk_guard": false, 00:14:22.971 "hdgst": false, 00:14:22.971 "ddgst": false, 00:14:22.971 "dhchap_key": "key0", 00:14:22.971 "dhchap_ctrlr_key": "key1", 00:14:22.971 "allow_unrecognized_csi": false, 00:14:22.971 "method": "bdev_nvme_attach_controller", 00:14:22.971 "req_id": 1 00:14:22.971 } 00:14:22.971 Got JSON-RPC error response 00:14:22.971 response: 00:14:22.971 { 00:14:22.971 "code": -5, 00:14:22.971 "message": "Input/output error" 00:14:22.971 } 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:22.971 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:23.539 nvme0n1 00:14:23.539 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:23.539 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:23.539 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.539 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.539 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.539 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.798 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 00:14:23.799 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.799 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.799 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.799 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:23.799 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:23.799 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:25.176 nvme0n1 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:25.176 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.436 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.436 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:25.436 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid febd874a-f7ac-4dde-b5e1-60c80814d053 -l 0 --dhchap-secret DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: --dhchap-ctrl-secret DHHC-1:03:NTc1Mjc1Zjg3MjZiNDNhYWQxZTBmNzRhNTljMTE0YTQ3NThlZDcyMmI0MDE5ZjllMDYyZWI0ODAxMmMzNzkzMKYKaXM=: 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.004 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:26.573 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:27.142 request: 00:14:27.142 { 00:14:27.142 "name": "nvme0", 00:14:27.142 "trtype": "tcp", 00:14:27.142 "traddr": "10.0.0.3", 00:14:27.142 "adrfam": "ipv4", 00:14:27.142 "trsvcid": "4420", 00:14:27.142 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053", 00:14:27.142 "prchk_reftag": false, 00:14:27.142 "prchk_guard": false, 00:14:27.142 "hdgst": false, 00:14:27.142 "ddgst": false, 00:14:27.142 "dhchap_key": "key1", 00:14:27.142 "allow_unrecognized_csi": false, 00:14:27.142 "method": "bdev_nvme_attach_controller", 00:14:27.142 "req_id": 1 00:14:27.142 } 00:14:27.142 Got JSON-RPC error response 00:14:27.142 response: 00:14:27.142 { 00:14:27.142 "code": -5, 00:14:27.142 "message": "Input/output error" 00:14:27.142 } 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:27.142 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:28.109 nvme0n1 00:14:28.109 
01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:28.109 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:28.109 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.369 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.369 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.369 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:28.628 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:28.888 nvme0n1 00:14:28.888 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:28.888 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:28.888 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.147 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.147 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.147 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.407 01:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: '' 2s 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: ]] 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjM5NjUxMmM3NjNjMGQxOGQ1YTI5OWQzZTRhMjc2M2Hmh1cj: 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:29.407 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:31.943 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:31.943 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: 2s 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:31.943 01:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: ]] 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmIyNDAxYTY5OTgzN2Y4NjY0MWEwNTljMjUzZDUyOGJiYzM4ODRkMDI3ZTYxNmEwrwxmXQ==: 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:31.943 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:33.850 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:33.850 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:33.850 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:33.850 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:33.851 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:34.789 nvme0n1 00:14:34.789 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.789 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.789 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.789 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.789 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.789 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:35.356 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:35.356 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.356 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:35.615 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:35.874 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:35.874 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:35.874 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:36.133 01:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.133 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:36.699 request: 00:14:36.699 { 00:14:36.699 "name": "nvme0", 00:14:36.699 "dhchap_key": "key1", 00:14:36.699 "dhchap_ctrlr_key": "key3", 00:14:36.699 "method": "bdev_nvme_set_keys", 00:14:36.699 "req_id": 1 00:14:36.699 } 00:14:36.699 Got JSON-RPC error response 00:14:36.699 response: 00:14:36.700 { 00:14:36.700 "code": -13, 00:14:36.700 "message": "Permission denied" 00:14:36.700 } 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.700 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:37.268 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:37.268 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:38.230 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:38.230 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:38.230 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:38.492 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:39.871 nvme0n1 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:39.871 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:40.440 request: 00:14:40.440 { 00:14:40.440 "name": "nvme0", 00:14:40.440 "dhchap_key": "key2", 00:14:40.440 "dhchap_ctrlr_key": "key0", 00:14:40.440 "method": "bdev_nvme_set_keys", 00:14:40.440 "req_id": 1 00:14:40.440 } 00:14:40.440 Got JSON-RPC error response 00:14:40.440 response: 00:14:40.440 { 00:14:40.440 "code": -13, 00:14:40.440 "message": "Permission denied" 00:14:40.440 } 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:40.440 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.700 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:40.700 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:41.636 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:41.636 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:41.636 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 81890 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81890 ']' 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81890 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81890 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:42.205 killing process with pid 81890 00:14:42.205 01:36:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81890' 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81890 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81890 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.205 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.464 rmmod nvme_tcp 00:14:42.464 rmmod nvme_fabrics 00:14:42.464 rmmod nvme_keyring 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 84924 ']' 00:14:42.464 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 84924 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 84924 ']' 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 84924 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84924 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.465 killing process with pid 84924 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84924' 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 84924 00:14:42.465 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 84924 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:42.724 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cbW /tmp/spdk.key-sha256.VaE /tmp/spdk.key-sha384.5sH /tmp/spdk.key-sha512.C0o /tmp/spdk.key-sha512.6ae /tmp/spdk.key-sha384.vJR /tmp/spdk.key-sha256.Rdf '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:42.984 00:14:42.984 real 3m8.786s 00:14:42.984 user 7m33.124s 00:14:42.984 sys 0m28.570s 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.984 ************************************ 00:14:42.984 END TEST nvmf_auth_target 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:42.984 ************************************ 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.984 ************************************ 00:14:42.984 START TEST nvmf_bdevio_no_huge 00:14:42.984 ************************************ 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:42.984 * Looking for test storage... 00:14:42.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:42.984 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:42.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.985 --rc genhtml_branch_coverage=1 00:14:42.985 --rc genhtml_function_coverage=1 00:14:42.985 --rc genhtml_legend=1 00:14:42.985 --rc geninfo_all_blocks=1 00:14:42.985 --rc geninfo_unexecuted_blocks=1 00:14:42.985 00:14:42.985 ' 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:42.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.985 --rc genhtml_branch_coverage=1 00:14:42.985 --rc genhtml_function_coverage=1 00:14:42.985 --rc genhtml_legend=1 00:14:42.985 --rc geninfo_all_blocks=1 00:14:42.985 --rc geninfo_unexecuted_blocks=1 00:14:42.985 00:14:42.985 ' 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:42.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.985 --rc genhtml_branch_coverage=1 00:14:42.985 --rc genhtml_function_coverage=1 00:14:42.985 --rc genhtml_legend=1 00:14:42.985 --rc geninfo_all_blocks=1 00:14:42.985 --rc geninfo_unexecuted_blocks=1 00:14:42.985 00:14:42.985 ' 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:42.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.985 --rc genhtml_branch_coverage=1 00:14:42.985 --rc genhtml_function_coverage=1 00:14:42.985 --rc genhtml_legend=1 00:14:42.985 --rc geninfo_all_blocks=1 00:14:42.985 --rc geninfo_unexecuted_blocks=1 00:14:42.985 00:14:42.985 ' 00:14:42.985 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:42.985 
01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:43.244 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:43.244 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:43.244 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:43.244 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:43.245 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:43.245 
01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:43.245 Cannot find device "nvmf_init_br" 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:43.245 Cannot find device "nvmf_init_br2" 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:43.245 Cannot find device "nvmf_tgt_br" 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:43.245 Cannot find device "nvmf_tgt_br2" 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:43.245 Cannot find device "nvmf_init_br" 00:14:43.245 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:43.246 Cannot find device "nvmf_init_br2" 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:43.246 Cannot find device "nvmf_tgt_br" 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:43.246 Cannot find device "nvmf_tgt_br2" 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:43.246 Cannot find device "nvmf_br" 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:43.246 Cannot find device "nvmf_init_if" 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:43.246 Cannot find device "nvmf_init_if2" 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:43.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:43.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:43.246 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:43.505 01:36:13 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:43.505 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:43.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:43.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:43.505 00:14:43.505 --- 10.0.0.3 ping statistics --- 00:14:43.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.505 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:43.505 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:43.506 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:43.506 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:43.506 00:14:43.506 --- 10.0.0.4 ping statistics --- 00:14:43.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.506 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:43.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:43.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:43.506 00:14:43.506 --- 10.0.0.1 ping statistics --- 00:14:43.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.506 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:43.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:43.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:14:43.506 00:14:43.506 --- 10.0.0.2 ping statistics --- 00:14:43.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.506 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=85570 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 85570 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 85570 ']' 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:43.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:43.506 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:43.506 [2024-12-16 01:36:14.140299] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:14:43.506 [2024-12-16 01:36:14.140428] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:43.766 [2024-12-16 01:36:14.306908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.766 [2024-12-16 01:36:14.363311] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.766 [2024-12-16 01:36:14.363382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.766 [2024-12-16 01:36:14.363408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:43.766 [2024-12-16 01:36:14.363417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:43.766 [2024-12-16 01:36:14.363426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:43.766 [2024-12-16 01:36:14.364033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:14:43.766 [2024-12-16 01:36:14.364622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:14:43.766 [2024-12-16 01:36:14.364740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:14:43.766 [2024-12-16 01:36:14.364748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.766 [2024-12-16 01:36:14.370442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 [2024-12-16 01:36:15.187551] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 Malloc0 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.703 01:36:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:44.703 [2024-12-16 01:36:15.227715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:44.703 { 00:14:44.703 "params": { 00:14:44.703 "name": "Nvme$subsystem", 00:14:44.703 "trtype": "$TEST_TRANSPORT", 00:14:44.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:44.703 "adrfam": "ipv4", 00:14:44.703 "trsvcid": "$NVMF_PORT", 00:14:44.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:44.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:44.703 "hdgst": ${hdgst:-false}, 00:14:44.703 "ddgst": ${ddgst:-false} 00:14:44.703 }, 00:14:44.703 "method": "bdev_nvme_attach_controller" 00:14:44.703 } 00:14:44.703 EOF 00:14:44.703 )") 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
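The target-side configuration traced above is ordinary SPDK JSON-RPC: rpc_cmd forwards its arguments to scripts/rpc.py against the running nvmf_tgt, while the bdevio initiator takes its controller definition from gen_nvmf_target_json, whose fully substituted output is printed next in the trace and handed to bdevio over /dev/fd/62. A minimal standalone sketch of the same target setup via rpc.py, with every value copied from the rpc_cmd calls above (assumes the default /var/tmp/spdk.sock RPC socket):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420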
00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:44.703 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:44.703 "params": { 00:14:44.703 "name": "Nvme1", 00:14:44.703 "trtype": "tcp", 00:14:44.703 "traddr": "10.0.0.3", 00:14:44.703 "adrfam": "ipv4", 00:14:44.703 "trsvcid": "4420", 00:14:44.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.703 "hdgst": false, 00:14:44.703 "ddgst": false 00:14:44.703 }, 00:14:44.703 "method": "bdev_nvme_attach_controller" 00:14:44.703 }' 00:14:44.703 [2024-12-16 01:36:15.292837] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:14:44.703 [2024-12-16 01:36:15.292935] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid85612 ] 00:14:44.961 [2024-12-16 01:36:15.450179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:44.961 [2024-12-16 01:36:15.508591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.961 [2024-12-16 01:36:15.508717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.961 [2024-12-16 01:36:15.508723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.961 [2024-12-16 01:36:15.523774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.220 I/O targets: 00:14:45.220 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:45.220 00:14:45.220 00:14:45.220 CUnit - A unit testing framework for C - Version 2.1-3 00:14:45.220 http://cunit.sourceforge.net/ 00:14:45.220 00:14:45.220 00:14:45.220 Suite: bdevio tests on: Nvme1n1 00:14:45.220 Test: blockdev write read block ...passed 00:14:45.220 Test: blockdev write zeroes read block ...passed 00:14:45.220 Test: blockdev write zeroes read no split ...passed 00:14:45.220 Test: blockdev write zeroes read split ...passed 00:14:45.220 Test: blockdev write zeroes read split partial ...passed 00:14:45.220 Test: blockdev reset ...[2024-12-16 01:36:15.761917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:45.220 [2024-12-16 01:36:15.762047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c51e0 (9): Bad file descriptor 00:14:45.220 [2024-12-16 01:36:15.778965] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:45.220 passed 00:14:45.220 Test: blockdev write read 8 blocks ...passed 00:14:45.220 Test: blockdev write read size > 128k ...passed 00:14:45.220 Test: blockdev write read invalid size ...passed 00:14:45.220 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:45.220 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:45.220 Test: blockdev write read max offset ...passed 00:14:45.220 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:45.220 Test: blockdev writev readv 8 blocks ...passed 00:14:45.220 Test: blockdev writev readv 30 x 1block ...passed 00:14:45.220 Test: blockdev writev readv block ...passed 00:14:45.220 Test: blockdev writev readv size > 128k ...passed 00:14:45.220 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:45.220 Test: blockdev comparev and writev ...[2024-12-16 01:36:15.791268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.220 [2024-12-16 01:36:15.791346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:45.220 [2024-12-16 01:36:15.791373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.220 [2024-12-16 01:36:15.791387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:45.220 [2024-12-16 01:36:15.791733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.220 [2024-12-16 01:36:15.791757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:45.220 [2024-12-16 01:36:15.791777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.221 [2024-12-16 01:36:15.791789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.792087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.221 [2024-12-16 01:36:15.792107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.792128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.221 [2024-12-16 01:36:15.792140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.792445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.221 [2024-12-16 01:36:15.792467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.792487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:45.221 [2024-12-16 01:36:15.792500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:45.221 passed 00:14:45.221 Test: blockdev nvme passthru rw ...passed 00:14:45.221 Test: blockdev nvme passthru vendor specific ...[2024-12-16 01:36:15.794032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.221 [2024-12-16 01:36:15.794300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.794445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.221 [2024-12-16 01:36:15.794466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.794596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.221 [2024-12-16 01:36:15.794618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:45.221 [2024-12-16 01:36:15.794738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.221 passed 00:14:45.221 Test: blockdev nvme admin passthru ...[2024-12-16 01:36:15.794764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:45.221 passed 00:14:45.221 Test: blockdev copy ...passed 00:14:45.221 00:14:45.221 Run Summary: Type Total Ran Passed Failed Inactive 00:14:45.221 suites 1 1 n/a 0 0 00:14:45.221 tests 23 23 23 0 0 00:14:45.221 asserts 152 152 152 0 n/a 00:14:45.221 00:14:45.221 Elapsed time = 0.177 seconds 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:45.480 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:45.739 rmmod nvme_tcp 00:14:45.739 rmmod nvme_fabrics 00:14:45.739 rmmod nvme_keyring 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 85570 ']' 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 85570 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 85570 ']' 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 85570 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85570 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85570' 00:14:45.739 killing process with pid 85570 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 85570 00:14:45.739 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 85570 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.999 01:36:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:45.999 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:46.258 00:14:46.258 real 0m3.352s 00:14:46.258 user 0m10.160s 00:14:46.258 sys 0m1.273s 00:14:46.258 ************************************ 00:14:46.258 END TEST nvmf_bdevio_no_huge 00:14:46.258 ************************************ 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.258 ************************************ 00:14:46.258 START TEST nvmf_tls 00:14:46.258 ************************************ 00:14:46.258 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:46.519 * Looking for test storage... 
00:14:46.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:46.519 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:46.519 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:14:46.519 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:46.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.519 --rc genhtml_branch_coverage=1 00:14:46.519 --rc genhtml_function_coverage=1 00:14:46.519 --rc genhtml_legend=1 00:14:46.519 --rc geninfo_all_blocks=1 00:14:46.519 --rc geninfo_unexecuted_blocks=1 00:14:46.519 00:14:46.519 ' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:46.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.519 --rc genhtml_branch_coverage=1 00:14:46.519 --rc genhtml_function_coverage=1 00:14:46.519 --rc genhtml_legend=1 00:14:46.519 --rc geninfo_all_blocks=1 00:14:46.519 --rc geninfo_unexecuted_blocks=1 00:14:46.519 00:14:46.519 ' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:46.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.519 --rc genhtml_branch_coverage=1 00:14:46.519 --rc genhtml_function_coverage=1 00:14:46.519 --rc genhtml_legend=1 00:14:46.519 --rc geninfo_all_blocks=1 00:14:46.519 --rc geninfo_unexecuted_blocks=1 00:14:46.519 00:14:46.519 ' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:46.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.519 --rc genhtml_branch_coverage=1 00:14:46.519 --rc genhtml_function_coverage=1 00:14:46.519 --rc genhtml_legend=1 00:14:46.519 --rc geninfo_all_blocks=1 00:14:46.519 --rc geninfo_unexecuted_blocks=1 00:14:46.519 00:14:46.519 ' 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.519 01:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.519 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.520 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:46.520 
01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:46.520 Cannot find device "nvmf_init_br" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:46.520 Cannot find device "nvmf_init_br2" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:46.520 Cannot find device "nvmf_tgt_br" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.520 Cannot find device "nvmf_tgt_br2" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:46.520 Cannot find device "nvmf_init_br" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:46.520 Cannot find device "nvmf_init_br2" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:46.520 Cannot find device "nvmf_tgt_br" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:46.520 Cannot find device "nvmf_tgt_br2" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:46.520 Cannot find device "nvmf_br" 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:46.520 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:46.780 Cannot find device "nvmf_init_if" 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:46.780 Cannot find device "nvmf_init_if2" 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.780 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.780 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:47.039 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:47.039 01:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:47.039 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:47.039 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:47.039 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:47.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:47.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:14:47.039 00:14:47.039 --- 10.0.0.3 ping statistics --- 00:14:47.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.039 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:47.040 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:47.040 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:14:47.040 00:14:47.040 --- 10.0.0.4 ping statistics --- 00:14:47.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.040 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:47.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:47.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:47.040 00:14:47.040 --- 10.0.0.1 ping statistics --- 00:14:47.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.040 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:47.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:47.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:14:47.040 00:14:47.040 --- 10.0.0.2 ping statistics --- 00:14:47.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.040 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85850 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85850 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85850 ']' 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:47.040 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.040 [2024-12-16 01:36:17.555153] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:14:47.040 [2024-12-16 01:36:17.555476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.300 [2024-12-16 01:36:17.715667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.300 [2024-12-16 01:36:17.738439] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.300 [2024-12-16 01:36:17.738501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.300 [2024-12-16 01:36:17.738519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.300 [2024-12-16 01:36:17.738560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.300 [2024-12-16 01:36:17.738570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.300 [2024-12-16 01:36:17.738929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:47.300 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:47.559 true 00:14:47.559 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:47.559 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:48.128 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:48.128 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:48.128 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:48.128 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:48.128 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.387 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:48.387 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:48.387 01:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:48.982 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:48.982 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:48.982 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:48.982 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:48.982 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:48.982 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:49.242 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:49.242 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:49.242 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:49.808 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:49.808 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:50.066 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:50.066 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:50.066 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:50.325 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:50.325 01:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:50.892 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.tFqdnZXmhj 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Ivm3DZtqMf 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tFqdnZXmhj 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Ivm3DZtqMf 00:14:50.893 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:51.152 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:51.411 [2024-12-16 01:36:22.046822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.669 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.tFqdnZXmhj 00:14:51.669 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tFqdnZXmhj 00:14:51.669 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:51.928 [2024-12-16 01:36:22.355168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.928 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:52.187 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:52.187 [2024-12-16 01:36:22.823303] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:52.187 [2024-12-16 01:36:22.823579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:52.447 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:52.706 malloc0 00:14:52.706 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:52.966 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tFqdnZXmhj 00:14:53.225 01:36:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:53.484 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tFqdnZXmhj 00:15:03.461 Initializing NVMe Controllers 00:15:03.461 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:03.461 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:03.461 Initialization complete. Launching workers. 00:15:03.461 ======================================================== 00:15:03.461 Latency(us) 00:15:03.461 Device Information : IOPS MiB/s Average min max 00:15:03.461 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9592.36 37.47 6673.64 1067.82 12849.97 00:15:03.461 ======================================================== 00:15:03.461 Total : 9592.36 37.47 6673.64 1067.82 12849.97 00:15:03.461 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tFqdnZXmhj 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tFqdnZXmhj 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86081 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86081 /var/tmp/bdevperf.sock 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86081 ']' 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:03.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
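For orientation, the target-side bring-up traced above condenses to a handful of rpc.py calls against the nvmf_tgt that was started with --wait-for-rpc. This is only a sketch, not the script itself: the key file name is a per-run mktemp result (here /tmp/tmp.tFqdnZXmhj), and the interchange-format PSK is assumed to be base64 over the ASCII key material plus a little-endian CRC-32, which matches the NVMeTLSkey-1:01: strings printed above but is not taken from the helper's implementation.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Assumed layout of the configured-PSK interchange string: "NVMeTLSkey-1:01:" prefix,
# base64(key material + CRC-32), trailing ":". The real helper is format_interchange_psk.
key_hex=00112233445566778899aabbccddeeff
psk=$(python3 - "$key_hex" <<'PY'
import base64, struct, sys, zlib
raw = sys.argv[1].encode()
print('NVMeTLSkey-1:01:' + base64.b64encode(raw + struct.pack('<I', zlib.crc32(raw))).decode() + ':')
PY
)
key_path=$(mktemp)                      # /tmp/tmp.tFqdnZXmhj in this run
echo -n "$psk" > "$key_path" && chmod 0600 "$key_path"

# Pin the ssl socket implementation to TLS 1.3 before finishing app init.
"$rpc_py" sock_impl_set_options -i ssl --tls-version 13
"$rpc_py" framework_start_init

# TCP transport, subsystem, TLS-enabled listener (-k), a malloc namespace,
# and a host that is allowed in via the registered PSK.
"$rpc_py" nvmf_create_transport -t tcp -o
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
"$rpc_py" bdev_malloc_create 32 4096 -b malloc0
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc_py" keyring_file_add_key key0 "$key_path"
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0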
00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.461 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.721 [2024-12-16 01:36:34.155630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:03.721 [2024-12-16 01:36:34.156245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86081 ] 00:15:03.721 [2024-12-16 01:36:34.304030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.721 [2024-12-16 01:36:34.326597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.721 [2024-12-16 01:36:34.358936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.979 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.979 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:03.979 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tFqdnZXmhj 00:15:04.546 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:04.803 [2024-12-16 01:36:35.266081] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:04.803 TLSTESTn1 00:15:04.803 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:05.061 Running I/O for 10 seconds... 
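The initiator side that produces the throughput samples below is driven over bdevperf's private RPC socket. Condensed from the trace, reusing the key file from the sketch above, with waitforlisten and cleanup omitted:

spdk=/home/vagrant/spdk_repo/spdk

# bdevperf in -z mode waits for RPC configuration, so it runs in the background.
"$spdk"/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Hand the PSK to bdevperf, then attach over TCP with TLS to the listener configured earlier.
"$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
"$spdk"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# The attach creates the TLSTESTn1 bdev; run the verify workload against it.
"$spdk"/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests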
00:15:06.935 3589.00 IOPS, 14.02 MiB/s [2024-12-16T01:36:38.529Z] 3967.00 IOPS, 15.50 MiB/s [2024-12-16T01:36:39.906Z] 4070.00 IOPS, 15.90 MiB/s [2024-12-16T01:36:40.842Z] 4154.00 IOPS, 16.23 MiB/s [2024-12-16T01:36:41.777Z] 4216.40 IOPS, 16.47 MiB/s [2024-12-16T01:36:42.711Z] 4256.33 IOPS, 16.63 MiB/s [2024-12-16T01:36:43.648Z] 4279.71 IOPS, 16.72 MiB/s [2024-12-16T01:36:44.584Z] 4301.12 IOPS, 16.80 MiB/s [2024-12-16T01:36:45.520Z] 4316.78 IOPS, 16.86 MiB/s [2024-12-16T01:36:45.779Z] 4327.30 IOPS, 16.90 MiB/s 00:15:15.121 Latency(us) 00:15:15.121 [2024-12-16T01:36:45.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.121 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:15.121 Verification LBA range: start 0x0 length 0x2000 00:15:15.121 TLSTESTn1 : 10.01 4333.34 16.93 0.00 0.00 29486.49 4230.05 31695.59 00:15:15.121 [2024-12-16T01:36:45.779Z] =================================================================================================================== 00:15:15.121 [2024-12-16T01:36:45.779Z] Total : 4333.34 16.93 0.00 0.00 29486.49 4230.05 31695.59 00:15:15.121 { 00:15:15.121 "results": [ 00:15:15.121 { 00:15:15.121 "job": "TLSTESTn1", 00:15:15.121 "core_mask": "0x4", 00:15:15.121 "workload": "verify", 00:15:15.121 "status": "finished", 00:15:15.121 "verify_range": { 00:15:15.121 "start": 0, 00:15:15.121 "length": 8192 00:15:15.121 }, 00:15:15.121 "queue_depth": 128, 00:15:15.121 "io_size": 4096, 00:15:15.121 "runtime": 10.014905, 00:15:15.121 "iops": 4333.34115500846, 00:15:15.121 "mibps": 16.927113886751798, 00:15:15.121 "io_failed": 0, 00:15:15.121 "io_timeout": 0, 00:15:15.121 "avg_latency_us": 29486.4939304283, 00:15:15.121 "min_latency_us": 4230.050909090909, 00:15:15.121 "max_latency_us": 31695.592727272728 00:15:15.121 } 00:15:15.121 ], 00:15:15.121 "core_count": 1 00:15:15.121 } 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 86081 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86081 ']' 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86081 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86081 00:15:15.121 killing process with pid 86081 00:15:15.121 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.121 00:15:15.121 Latency(us) 00:15:15.121 [2024-12-16T01:36:45.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.121 [2024-12-16T01:36:45.779Z] =================================================================================================================== 00:15:15.121 [2024-12-16T01:36:45.779Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 86081' 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86081 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86081 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ivm3DZtqMf 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ivm3DZtqMf 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:15.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ivm3DZtqMf 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:15.121 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ivm3DZtqMf 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86213 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86213 /var/tmp/bdevperf.sock 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86213 ']' 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.122 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.122 [2024-12-16 01:36:45.766900] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:15.122 [2024-12-16 01:36:45.767213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86213 ] 00:15:15.381 [2024-12-16 01:36:45.916705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.381 [2024-12-16 01:36:45.937433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:15.381 [2024-12-16 01:36:45.967649] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.381 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.381 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:15.381 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ivm3DZtqMf 00:15:15.640 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:15.899 [2024-12-16 01:36:46.545423] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:15.899 [2024-12-16 01:36:46.552716] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:15.899 [2024-12-16 01:36:46.553336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12efd30 (107): Transport endpoint is not connected 00:15:15.899 [2024-12-16 01:36:46.554322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12efd30 (9): Bad file descriptor 00:15:15.899 [2024-12-16 01:36:46.555301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:15.899 [2024-12-16 01:36:46.555809] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:16.160 [2024-12-16 01:36:46.556066] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:16.160 [2024-12-16 01:36:46.556515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:16.160 request: 00:15:16.160 { 00:15:16.160 "name": "TLSTEST", 00:15:16.160 "trtype": "tcp", 00:15:16.160 "traddr": "10.0.0.3", 00:15:16.160 "adrfam": "ipv4", 00:15:16.160 "trsvcid": "4420", 00:15:16.160 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.160 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:16.160 "prchk_reftag": false, 00:15:16.160 "prchk_guard": false, 00:15:16.160 "hdgst": false, 00:15:16.160 "ddgst": false, 00:15:16.160 "psk": "key0", 00:15:16.160 "allow_unrecognized_csi": false, 00:15:16.160 "method": "bdev_nvme_attach_controller", 00:15:16.160 "req_id": 1 00:15:16.160 } 00:15:16.160 Got JSON-RPC error response 00:15:16.160 response: 00:15:16.160 { 00:15:16.160 "code": -5, 00:15:16.160 "message": "Input/output error" 00:15:16.160 } 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86213 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86213 ']' 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86213 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86213 00:15:16.160 killing process with pid 86213 00:15:16.160 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.160 00:15:16.160 Latency(us) 00:15:16.160 [2024-12-16T01:36:46.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.160 [2024-12-16T01:36:46.818Z] =================================================================================================================== 00:15:16.160 [2024-12-16T01:36:46.818Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86213' 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86213 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86213 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tFqdnZXmhj 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tFqdnZXmhj 
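The failure cases in this stretch of the suite all go through the NOT wrapper that the trace shows being resolved via valid_exec_arg: the wrapped command is expected to fail, and the test passes only because it does. A minimal, hypothetical stand-in for that pattern, assuming the real helpers in common/autotest_common.sh do the same thing plus argument validation:

# Simplified, hypothetical equivalent of the NOT helper seen in the trace:
# succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1        # unexpected success
    fi
    return 0            # expected failure
}

# Usage mirroring the case above: attaching with the mismatched key /tmp/tmp.Ivm3DZtqMf
# must fail with the JSON-RPC "Input/output error" shown in the log.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ivm3DZtqMf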
00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tFqdnZXmhj 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:16.160 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tFqdnZXmhj 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86234 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86234 /var/tmp/bdevperf.sock 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86234 ']' 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.161 01:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.161 [2024-12-16 01:36:46.782679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:16.161 [2024-12-16 01:36:46.782946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86234 ] 00:15:16.448 [2024-12-16 01:36:46.920438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.448 [2024-12-16 01:36:46.942196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.448 [2024-12-16 01:36:46.973050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.448 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.448 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:16.448 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tFqdnZXmhj 00:15:16.722 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:15:16.981 [2024-12-16 01:36:47.594871] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:16.981 [2024-12-16 01:36:47.602702] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:16.981 [2024-12-16 01:36:47.602740] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:16.981 [2024-12-16 01:36:47.602803] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:16.981 [2024-12-16 01:36:47.603649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cdd30 (107): Transport endpoint is not connected 00:15:16.981 [2024-12-16 01:36:47.604635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cdd30 (9): Bad file descriptor 00:15:16.981 [2024-12-16 01:36:47.605631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:16.981 [2024-12-16 01:36:47.605677] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:16.981 [2024-12-16 01:36:47.605688] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:16.981 [2024-12-16 01:36:47.605698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:15:16.981 request: 00:15:16.981 { 00:15:16.981 "name": "TLSTEST", 00:15:16.981 "trtype": "tcp", 00:15:16.981 "traddr": "10.0.0.3", 00:15:16.981 "adrfam": "ipv4", 00:15:16.982 "trsvcid": "4420", 00:15:16.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.982 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:16.982 "prchk_reftag": false, 00:15:16.982 "prchk_guard": false, 00:15:16.982 "hdgst": false, 00:15:16.982 "ddgst": false, 00:15:16.982 "psk": "key0", 00:15:16.982 "allow_unrecognized_csi": false, 00:15:16.982 "method": "bdev_nvme_attach_controller", 00:15:16.982 "req_id": 1 00:15:16.982 } 00:15:16.982 Got JSON-RPC error response 00:15:16.982 response: 00:15:16.982 { 00:15:16.982 "code": -5, 00:15:16.982 "message": "Input/output error" 00:15:16.982 } 00:15:16.982 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86234 00:15:16.982 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86234 ']' 00:15:16.982 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86234 00:15:16.982 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:16.982 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.982 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86234 00:15:17.242 killing process with pid 86234 00:15:17.242 Received shutdown signal, test time was about 10.000000 seconds 00:15:17.242 00:15:17.242 Latency(us) 00:15:17.242 [2024-12-16T01:36:47.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.242 [2024-12-16T01:36:47.900Z] =================================================================================================================== 00:15:17.242 [2024-12-16T01:36:47.900Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86234' 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86234 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86234 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tFqdnZXmhj 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tFqdnZXmhj 
00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tFqdnZXmhj 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tFqdnZXmhj 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86255 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86255 /var/tmp/bdevperf.sock 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86255 ']' 00:15:17.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.242 01:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:17.242 [2024-12-16 01:36:47.837671] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:17.242 [2024-12-16 01:36:47.838305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86255 ] 00:15:17.502 [2024-12-16 01:36:47.989819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.502 [2024-12-16 01:36:48.013481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.502 [2024-12-16 01:36:48.046228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:17.502 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.502 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:17.502 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tFqdnZXmhj 00:15:17.761 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:18.020 [2024-12-16 01:36:48.637102] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:18.020 [2024-12-16 01:36:48.642865] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:18.020 [2024-12-16 01:36:48.642908] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:18.020 [2024-12-16 01:36:48.642959] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:18.020 [2024-12-16 01:36:48.643239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbd30 (107): Transport endpoint is not connected 00:15:18.020 [2024-12-16 01:36:48.644296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbcbd30 (9): Bad file descriptor 00:15:18.020 [2024-12-16 01:36:48.645291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:15:18.020 [2024-12-16 01:36:48.645658] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:15:18.020 [2024-12-16 01:36:48.645675] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:15:18.020 [2024-12-16 01:36:48.645687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
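The bdev_nvme_attach_controller request/response pair dumped next is an ordinary SPDK JSON-RPC exchange over the bdevperf Unix socket, normally driven by scripts/rpc.py. A hand-rolled replay would look roughly like the sketch below; the "jsonrpc"/"id" envelope fields and the single recv() are assumptions for illustration, since the log only shows the method and params:

    import json, socket

    # Hypothetical manual replay of the attach call made by this test;
    # scripts/rpc.py -s /var/tmp/bdevperf.sock performs the same exchange.
    req = {
        "jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode2",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "key0",
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/var/tmp/bdevperf.sock")
        s.sendall(json.dumps(req).encode())
        print(s.recv(65536).decode())   # expect code -5, "Input/output error", as dumped below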
00:15:18.020 request: 00:15:18.020 { 00:15:18.020 "name": "TLSTEST", 00:15:18.020 "trtype": "tcp", 00:15:18.020 "traddr": "10.0.0.3", 00:15:18.020 "adrfam": "ipv4", 00:15:18.020 "trsvcid": "4420", 00:15:18.020 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:18.020 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.020 "prchk_reftag": false, 00:15:18.020 "prchk_guard": false, 00:15:18.020 "hdgst": false, 00:15:18.020 "ddgst": false, 00:15:18.020 "psk": "key0", 00:15:18.020 "allow_unrecognized_csi": false, 00:15:18.020 "method": "bdev_nvme_attach_controller", 00:15:18.020 "req_id": 1 00:15:18.020 } 00:15:18.020 Got JSON-RPC error response 00:15:18.020 response: 00:15:18.020 { 00:15:18.020 "code": -5, 00:15:18.020 "message": "Input/output error" 00:15:18.020 } 00:15:18.020 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86255 00:15:18.020 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86255 ']' 00:15:18.020 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86255 00:15:18.020 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:18.020 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:18.020 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86255 00:15:18.280 killing process with pid 86255 00:15:18.280 Received shutdown signal, test time was about 10.000000 seconds 00:15:18.280 00:15:18.280 Latency(us) 00:15:18.280 [2024-12-16T01:36:48.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.280 [2024-12-16T01:36:48.938Z] =================================================================================================================== 00:15:18.280 [2024-12-16T01:36:48.938Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86255' 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86255 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86255 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:18.280 01:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:18.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86275 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86275 /var/tmp/bdevperf.sock 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86275 ']' 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.280 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.281 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:18.281 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.281 01:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.281 [2024-12-16 01:36:48.892228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:18.281 [2024-12-16 01:36:48.892327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86275 ] 00:15:18.539 [2024-12-16 01:36:49.041194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.539 [2024-12-16 01:36:49.062334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.539 [2024-12-16 01:36:49.091774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:18.539 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.539 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:18.539 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:15:18.798 [2024-12-16 01:36:49.408672] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:15:18.798 [2024-12-16 01:36:49.408722] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:18.798 request: 00:15:18.798 { 00:15:18.798 "name": "key0", 00:15:18.798 "path": "", 00:15:18.798 "method": "keyring_file_add_key", 00:15:18.798 "req_id": 1 00:15:18.798 } 00:15:18.798 Got JSON-RPC error response 00:15:18.798 response: 00:15:18.798 { 00:15:18.798 "code": -1, 00:15:18.798 "message": "Operation not permitted" 00:15:18.798 } 00:15:18.798 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:19.057 [2024-12-16 01:36:49.677790] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:19.057 [2024-12-16 01:36:49.677864] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:19.057 request: 00:15:19.057 { 00:15:19.057 "name": "TLSTEST", 00:15:19.057 "trtype": "tcp", 00:15:19.057 "traddr": "10.0.0.3", 00:15:19.057 "adrfam": "ipv4", 00:15:19.057 "trsvcid": "4420", 00:15:19.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:19.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:19.057 "prchk_reftag": false, 00:15:19.057 "prchk_guard": false, 00:15:19.057 "hdgst": false, 00:15:19.057 "ddgst": false, 00:15:19.057 "psk": "key0", 00:15:19.057 "allow_unrecognized_csi": false, 00:15:19.057 "method": "bdev_nvme_attach_controller", 00:15:19.057 "req_id": 1 00:15:19.057 } 00:15:19.057 Got JSON-RPC error response 00:15:19.057 response: 00:15:19.057 { 00:15:19.057 "code": -126, 00:15:19.057 "message": "Required key not available" 00:15:19.057 } 00:15:19.057 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86275 00:15:19.057 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86275 ']' 00:15:19.057 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86275 00:15:19.057 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.057 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.057 01:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86275 00:15:19.317 killing process with pid 86275 00:15:19.317 Received shutdown signal, test time was about 10.000000 seconds 00:15:19.317 00:15:19.317 Latency(us) 00:15:19.317 [2024-12-16T01:36:49.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:19.317 [2024-12-16T01:36:49.975Z] =================================================================================================================== 00:15:19.317 [2024-12-16T01:36:49.975Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86275' 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86275 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86275 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 85850 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85850 ']' 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85850 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85850 00:15:19.317 killing process with pid 85850 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85850' 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85850 00:15:19.317 01:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85850 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.2AKhlA2Ph1 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.2AKhlA2Ph1 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86313 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86313 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86313 ']' 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.576 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.576 [2024-12-16 01:36:50.134817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:19.576 [2024-12-16 01:36:50.134928] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.835 [2024-12-16 01:36:50.288093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.835 [2024-12-16 01:36:50.310283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.835 [2024-12-16 01:36:50.310356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
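The key_long value printed above (NVMeTLSkey-1:02:...wWXNJw==:) is the TLS PSK interchange form of the 48-byte configured secret: the secret with a 4-byte CRC-32 check value appended is base64-encoded and wrapped with the NVMeTLSkey-1 prefix plus the two-digit hash identifier passed to format_interchange_psk. A small sketch of that computation (function name illustrative; the little-endian byte order of the CRC is an assumption):

    import base64, zlib

    def interchange_psk(secret: bytes, hash_id: int) -> str:
        # Append CRC-32 of the secret (assumed little endian), base64 the result, wrap it.
        crc = zlib.crc32(secret).to_bytes(4, "little")
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, base64.b64encode(secret + crc).decode())

    print(interchange_psk(b"00112233445566778899aabbccddeeff0011223344556677", 2))
    # should print the NVMeTLSkey-1:02:MDAxMTIy...: value echoed into /tmp/tmp.2AKhlA2Ph1 above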
00:15:19.835 [2024-12-16 01:36:50.310370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.835 [2024-12-16 01:36:50.310380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.835 [2024-12-16 01:36:50.310390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.835 [2024-12-16 01:36:50.310757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.835 [2024-12-16 01:36:50.342953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.2AKhlA2Ph1 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2AKhlA2Ph1 00:15:19.835 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:20.094 [2024-12-16 01:36:50.721950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.094 01:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:20.661 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:20.661 [2024-12-16 01:36:51.314094] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:20.661 [2024-12-16 01:36:51.314331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.920 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:21.178 malloc0 00:15:21.178 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:21.436 01:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:21.694 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2AKhlA2Ph1 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
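With the 0600 key registered as key0 and host1 mapped to it, the attach below succeeds and bdevperf drives verify traffic for 10 seconds. The MiB/s column it reports is simply IOPS multiplied by the 4 KiB I/O size from the command line, for example:

    iops = 4177.87                      # final figure reported below for TLSTESTn1
    io_size = 4096                      # -o 4096 on the bdevperf command line
    print(iops * io_size / (1 << 20))   # ~16.32 MiB/s, matching the results table below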
00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2AKhlA2Ph1 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86361 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86361 /var/tmp/bdevperf.sock 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86361 ']' 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.952 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.952 [2024-12-16 01:36:52.483704] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:21.952 [2024-12-16 01:36:52.483967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86361 ] 00:15:22.211 [2024-12-16 01:36:52.633892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.211 [2024-12-16 01:36:52.658827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.211 [2024-12-16 01:36:52.693040] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.211 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.211 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:22.211 01:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:22.469 01:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:22.728 [2024-12-16 01:36:53.312708] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:22.728 TLSTESTn1 00:15:22.987 01:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:22.987 Running I/O for 10 seconds... 00:15:25.297 3960.00 IOPS, 15.47 MiB/s [2024-12-16T01:36:56.890Z] 3980.50 IOPS, 15.55 MiB/s [2024-12-16T01:36:57.826Z] 3990.00 IOPS, 15.59 MiB/s [2024-12-16T01:36:58.796Z] 4029.50 IOPS, 15.74 MiB/s [2024-12-16T01:36:59.758Z] 4054.20 IOPS, 15.84 MiB/s [2024-12-16T01:37:00.696Z] 4069.50 IOPS, 15.90 MiB/s [2024-12-16T01:37:01.634Z] 4092.43 IOPS, 15.99 MiB/s [2024-12-16T01:37:02.574Z] 4120.12 IOPS, 16.09 MiB/s [2024-12-16T01:37:03.951Z] 4137.56 IOPS, 16.16 MiB/s [2024-12-16T01:37:03.951Z] 4171.10 IOPS, 16.29 MiB/s 00:15:33.293 Latency(us) 00:15:33.293 [2024-12-16T01:37:03.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.293 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:33.293 Verification LBA range: start 0x0 length 0x2000 00:15:33.293 TLSTESTn1 : 10.01 4177.87 16.32 0.00 0.00 30586.12 4319.42 26691.03 00:15:33.293 [2024-12-16T01:37:03.951Z] =================================================================================================================== 00:15:33.293 [2024-12-16T01:37:03.951Z] Total : 4177.87 16.32 0.00 0.00 30586.12 4319.42 26691.03 00:15:33.293 { 00:15:33.293 "results": [ 00:15:33.293 { 00:15:33.293 "job": "TLSTESTn1", 00:15:33.293 "core_mask": "0x4", 00:15:33.293 "workload": "verify", 00:15:33.293 "status": "finished", 00:15:33.293 "verify_range": { 00:15:33.293 "start": 0, 00:15:33.293 "length": 8192 00:15:33.293 }, 00:15:33.293 "queue_depth": 128, 00:15:33.293 "io_size": 4096, 00:15:33.293 "runtime": 10.013945, 00:15:33.293 "iops": 4177.873954770073, 00:15:33.293 "mibps": 16.3198201358206, 00:15:33.293 "io_failed": 0, 00:15:33.293 "io_timeout": 0, 00:15:33.293 "avg_latency_us": 30586.118731440416, 00:15:33.293 "min_latency_us": 4319.418181818181, 00:15:33.293 
"max_latency_us": 26691.025454545455 00:15:33.293 } 00:15:33.293 ], 00:15:33.293 "core_count": 1 00:15:33.293 } 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 86361 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86361 ']' 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86361 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86361 00:15:33.293 killing process with pid 86361 00:15:33.293 Received shutdown signal, test time was about 10.000000 seconds 00:15:33.293 00:15:33.293 Latency(us) 00:15:33.293 [2024-12-16T01:37:03.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.293 [2024-12-16T01:37:03.951Z] =================================================================================================================== 00:15:33.293 [2024-12-16T01:37:03.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86361' 00:15:33.293 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86361 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86361 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.2AKhlA2Ph1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2AKhlA2Ph1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2AKhlA2Ph1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2AKhlA2Ph1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.2AKhlA2Ph1 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=86489 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 86489 /var/tmp/bdevperf.sock 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86489 ']' 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.294 01:37:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.294 [2024-12-16 01:37:03.825948] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:33.294 [2024-12-16 01:37:03.826336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86489 ] 00:15:33.553 [2024-12-16 01:37:03.980268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.553 [2024-12-16 01:37:04.004989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.553 [2024-12-16 01:37:04.040961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.553 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.553 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:33.553 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:33.812 [2024-12-16 01:37:04.349764] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2AKhlA2Ph1': 0100666 00:15:33.812 [2024-12-16 01:37:04.349820] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:33.812 request: 00:15:33.812 { 00:15:33.812 "name": "key0", 00:15:33.812 "path": "/tmp/tmp.2AKhlA2Ph1", 00:15:33.812 "method": "keyring_file_add_key", 00:15:33.812 "req_id": 1 00:15:33.812 } 00:15:33.812 Got JSON-RPC error response 00:15:33.812 response: 00:15:33.812 { 00:15:33.812 "code": -1, 00:15:33.812 "message": "Operation not permitted" 00:15:33.812 } 00:15:33.812 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:34.071 [2024-12-16 01:37:04.666020] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:34.072 [2024-12-16 01:37:04.666090] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:34.072 request: 00:15:34.072 { 00:15:34.072 "name": "TLSTEST", 00:15:34.072 "trtype": "tcp", 00:15:34.072 "traddr": "10.0.0.3", 00:15:34.072 "adrfam": "ipv4", 00:15:34.072 "trsvcid": "4420", 00:15:34.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.072 "prchk_reftag": false, 00:15:34.072 "prchk_guard": false, 00:15:34.072 "hdgst": false, 00:15:34.072 "ddgst": false, 00:15:34.072 "psk": "key0", 00:15:34.072 "allow_unrecognized_csi": false, 00:15:34.072 "method": "bdev_nvme_attach_controller", 00:15:34.072 "req_id": 1 00:15:34.072 } 00:15:34.072 Got JSON-RPC error response 00:15:34.072 response: 00:15:34.072 { 00:15:34.072 "code": -126, 00:15:34.072 "message": "Required key not available" 00:15:34.072 } 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 86489 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86489 ']' 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86489 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86489 00:15:34.072 killing process with pid 86489 00:15:34.072 Received shutdown signal, test time was about 10.000000 seconds 00:15:34.072 00:15:34.072 Latency(us) 00:15:34.072 [2024-12-16T01:37:04.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.072 [2024-12-16T01:37:04.730Z] =================================================================================================================== 00:15:34.072 [2024-12-16T01:37:04.730Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86489' 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86489 00:15:34.072 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86489 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 86313 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86313 ']' 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86313 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86313 00:15:34.332 killing process with pid 86313 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86313' 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86313 00:15:34.332 01:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86313 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:34.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86515 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86515 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86515 ']' 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.591 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.591 [2024-12-16 01:37:05.094301] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:34.591 [2024-12-16 01:37:05.094615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.591 [2024-12-16 01:37:05.245071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.850 [2024-12-16 01:37:05.265321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.850 [2024-12-16 01:37:05.265376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.850 [2024-12-16 01:37:05.265388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.850 [2024-12-16 01:37:05.265396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.850 [2024-12-16 01:37:05.265404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
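Both key failures around this point, the bdevperf-side keyring_file_add_key above and the target-side one that follows, reject /tmp/tmp.2AKhlA2Ph1 because chmod 0666 left it group- and other-accessible; the earlier empty-path case was refused for not being an absolute path. A rough sketch of that kind of check (illustrative, not the SPDK keyring source):

    import os, stat

    def check_key_path(path: str) -> None:
        # Mirrors the two errors seen in this test: non-absolute paths are refused,
        # and so is any key file readable or writable by group/other (e.g. 0666).
        if not os.path.isabs(path):
            raise ValueError("Non-absolute paths are not allowed: %s" % path)
        mode = os.stat(path).st_mode
        if mode & (stat.S_IRWXG | stat.S_IRWXO):
            raise PermissionError("Invalid permissions for key file '%s': %o" % (path, mode))

    # check_key_path("/tmp/tmp.2AKhlA2Ph1")  # raises while the file is 0666, passes once it is back to 0600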
00:15:34.850 [2024-12-16 01:37:05.265716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.850 [2024-12-16 01:37:05.299261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:34.850 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.2AKhlA2Ph1 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.2AKhlA2Ph1 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.2AKhlA2Ph1 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2AKhlA2Ph1 00:15:34.851 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:35.110 [2024-12-16 01:37:05.681228] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.110 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:35.369 01:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:35.629 [2024-12-16 01:37:06.189327] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:35.629 [2024-12-16 01:37:06.189531] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.629 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:35.888 malloc0 00:15:35.888 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:36.148 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:36.407 
[2024-12-16 01:37:06.954748] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.2AKhlA2Ph1': 0100666 00:15:36.407 [2024-12-16 01:37:06.955011] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:36.407 request: 00:15:36.407 { 00:15:36.407 "name": "key0", 00:15:36.407 "path": "/tmp/tmp.2AKhlA2Ph1", 00:15:36.407 "method": "keyring_file_add_key", 00:15:36.407 "req_id": 1 00:15:36.407 } 00:15:36.407 Got JSON-RPC error response 00:15:36.407 response: 00:15:36.407 { 00:15:36.407 "code": -1, 00:15:36.407 "message": "Operation not permitted" 00:15:36.407 } 00:15:36.407 01:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:36.667 [2024-12-16 01:37:07.206856] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:36.667 [2024-12-16 01:37:07.207158] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:36.667 request: 00:15:36.667 { 00:15:36.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.667 "host": "nqn.2016-06.io.spdk:host1", 00:15:36.667 "psk": "key0", 00:15:36.667 "method": "nvmf_subsystem_add_host", 00:15:36.667 "req_id": 1 00:15:36.667 } 00:15:36.667 Got JSON-RPC error response 00:15:36.667 response: 00:15:36.667 { 00:15:36.667 "code": -32603, 00:15:36.667 "message": "Internal error" 00:15:36.667 } 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 86515 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86515 ']' 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86515 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86515 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86515' 00:15:36.667 killing process with pid 86515 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86515 00:15:36.667 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86515 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.2AKhlA2Ph1 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86577 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86577 00:15:36.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86577 ']' 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.926 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.926 [2024-12-16 01:37:07.465706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:36.926 [2024-12-16 01:37:07.465958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.186 [2024-12-16 01:37:07.613411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.186 [2024-12-16 01:37:07.634245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.186 [2024-12-16 01:37:07.634330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.186 [2024-12-16 01:37:07.634357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.186 [2024-12-16 01:37:07.634379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.186 [2024-12-16 01:37:07.634386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:37.186 [2024-12-16 01:37:07.634722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.186 [2024-12-16 01:37:07.663975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.2AKhlA2Ph1 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2AKhlA2Ph1 00:15:37.186 01:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:37.445 [2024-12-16 01:37:08.012060] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.445 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:37.705 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:37.965 [2024-12-16 01:37:08.492226] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.965 [2024-12-16 01:37:08.492465] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.965 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:38.224 malloc0 00:15:38.224 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:38.483 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:38.742 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:39.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
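For reference, the target-side setup exercised above reduces to the following RPC sequence. This is a rough sketch using the same NQNs, address, and temporary key path as the test; the full rpc.py path from the log is shortened to scripts/rpc.py. The earlier keyring_file_add_key failure shows that a world-readable key file (mode 0666) is rejected, which is why the harness runs chmod 0600 on the PSK file before this second, successful pass:

  # restrict the PSK interchange file first; 0666 permissions are rejected by the keyring
  chmod 0600 /tmp/tmp.2AKhlA2Ph1

  # target-side TLS setup, mirroring setup_nvmf_tgt in target/tls.sh
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0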
00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=86625 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 86625 /var/tmp/bdevperf.sock 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86625 ']' 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.002 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:39.002 [2024-12-16 01:37:09.558351] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:39.002 [2024-12-16 01:37:09.558834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86625 ] 00:15:39.262 [2024-12-16 01:37:09.717494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.262 [2024-12-16 01:37:09.742933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.262 [2024-12-16 01:37:09.778140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:40.200 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.200 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:40.200 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:40.200 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:40.459 [2024-12-16 01:37:10.946911] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:40.459 TLSTESTn1 00:15:40.459 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:41.028 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:41.028 "subsystems": [ 00:15:41.028 { 00:15:41.028 "subsystem": "keyring", 00:15:41.028 "config": [ 00:15:41.028 { 00:15:41.028 "method": "keyring_file_add_key", 00:15:41.028 "params": { 00:15:41.028 "name": "key0", 00:15:41.028 "path": "/tmp/tmp.2AKhlA2Ph1" 00:15:41.028 } 00:15:41.028 } 00:15:41.028 ] 00:15:41.028 }, 
00:15:41.028 { 00:15:41.028 "subsystem": "iobuf", 00:15:41.028 "config": [ 00:15:41.028 { 00:15:41.028 "method": "iobuf_set_options", 00:15:41.028 "params": { 00:15:41.028 "small_pool_count": 8192, 00:15:41.028 "large_pool_count": 1024, 00:15:41.028 "small_bufsize": 8192, 00:15:41.028 "large_bufsize": 135168, 00:15:41.028 "enable_numa": false 00:15:41.028 } 00:15:41.028 } 00:15:41.028 ] 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "subsystem": "sock", 00:15:41.028 "config": [ 00:15:41.028 { 00:15:41.028 "method": "sock_set_default_impl", 00:15:41.028 "params": { 00:15:41.028 "impl_name": "uring" 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "sock_impl_set_options", 00:15:41.028 "params": { 00:15:41.028 "impl_name": "ssl", 00:15:41.028 "recv_buf_size": 4096, 00:15:41.028 "send_buf_size": 4096, 00:15:41.028 "enable_recv_pipe": true, 00:15:41.028 "enable_quickack": false, 00:15:41.028 "enable_placement_id": 0, 00:15:41.028 "enable_zerocopy_send_server": true, 00:15:41.028 "enable_zerocopy_send_client": false, 00:15:41.028 "zerocopy_threshold": 0, 00:15:41.028 "tls_version": 0, 00:15:41.028 "enable_ktls": false 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "sock_impl_set_options", 00:15:41.028 "params": { 00:15:41.028 "impl_name": "posix", 00:15:41.028 "recv_buf_size": 2097152, 00:15:41.028 "send_buf_size": 2097152, 00:15:41.028 "enable_recv_pipe": true, 00:15:41.028 "enable_quickack": false, 00:15:41.028 "enable_placement_id": 0, 00:15:41.028 "enable_zerocopy_send_server": true, 00:15:41.028 "enable_zerocopy_send_client": false, 00:15:41.028 "zerocopy_threshold": 0, 00:15:41.028 "tls_version": 0, 00:15:41.028 "enable_ktls": false 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "sock_impl_set_options", 00:15:41.028 "params": { 00:15:41.028 "impl_name": "uring", 00:15:41.028 "recv_buf_size": 2097152, 00:15:41.028 "send_buf_size": 2097152, 00:15:41.028 "enable_recv_pipe": true, 00:15:41.028 "enable_quickack": false, 00:15:41.028 "enable_placement_id": 0, 00:15:41.028 "enable_zerocopy_send_server": false, 00:15:41.028 "enable_zerocopy_send_client": false, 00:15:41.028 "zerocopy_threshold": 0, 00:15:41.028 "tls_version": 0, 00:15:41.028 "enable_ktls": false 00:15:41.028 } 00:15:41.028 } 00:15:41.028 ] 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "subsystem": "vmd", 00:15:41.028 "config": [] 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "subsystem": "accel", 00:15:41.028 "config": [ 00:15:41.028 { 00:15:41.028 "method": "accel_set_options", 00:15:41.028 "params": { 00:15:41.028 "small_cache_size": 128, 00:15:41.028 "large_cache_size": 16, 00:15:41.028 "task_count": 2048, 00:15:41.028 "sequence_count": 2048, 00:15:41.028 "buf_count": 2048 00:15:41.028 } 00:15:41.028 } 00:15:41.028 ] 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "subsystem": "bdev", 00:15:41.028 "config": [ 00:15:41.028 { 00:15:41.028 "method": "bdev_set_options", 00:15:41.028 "params": { 00:15:41.028 "bdev_io_pool_size": 65535, 00:15:41.028 "bdev_io_cache_size": 256, 00:15:41.028 "bdev_auto_examine": true, 00:15:41.028 "iobuf_small_cache_size": 128, 00:15:41.028 "iobuf_large_cache_size": 16 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "bdev_raid_set_options", 00:15:41.028 "params": { 00:15:41.028 "process_window_size_kb": 1024, 00:15:41.028 "process_max_bandwidth_mb_sec": 0 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "bdev_iscsi_set_options", 00:15:41.028 "params": { 00:15:41.028 "timeout_sec": 30 00:15:41.028 } 00:15:41.028 
}, 00:15:41.028 { 00:15:41.028 "method": "bdev_nvme_set_options", 00:15:41.028 "params": { 00:15:41.028 "action_on_timeout": "none", 00:15:41.028 "timeout_us": 0, 00:15:41.028 "timeout_admin_us": 0, 00:15:41.028 "keep_alive_timeout_ms": 10000, 00:15:41.028 "arbitration_burst": 0, 00:15:41.028 "low_priority_weight": 0, 00:15:41.028 "medium_priority_weight": 0, 00:15:41.028 "high_priority_weight": 0, 00:15:41.028 "nvme_adminq_poll_period_us": 10000, 00:15:41.028 "nvme_ioq_poll_period_us": 0, 00:15:41.028 "io_queue_requests": 0, 00:15:41.028 "delay_cmd_submit": true, 00:15:41.028 "transport_retry_count": 4, 00:15:41.028 "bdev_retry_count": 3, 00:15:41.028 "transport_ack_timeout": 0, 00:15:41.028 "ctrlr_loss_timeout_sec": 0, 00:15:41.028 "reconnect_delay_sec": 0, 00:15:41.028 "fast_io_fail_timeout_sec": 0, 00:15:41.028 "disable_auto_failback": false, 00:15:41.028 "generate_uuids": false, 00:15:41.028 "transport_tos": 0, 00:15:41.028 "nvme_error_stat": false, 00:15:41.028 "rdma_srq_size": 0, 00:15:41.028 "io_path_stat": false, 00:15:41.028 "allow_accel_sequence": false, 00:15:41.028 "rdma_max_cq_size": 0, 00:15:41.028 "rdma_cm_event_timeout_ms": 0, 00:15:41.028 "dhchap_digests": [ 00:15:41.028 "sha256", 00:15:41.028 "sha384", 00:15:41.028 "sha512" 00:15:41.028 ], 00:15:41.028 "dhchap_dhgroups": [ 00:15:41.028 "null", 00:15:41.028 "ffdhe2048", 00:15:41.028 "ffdhe3072", 00:15:41.028 "ffdhe4096", 00:15:41.028 "ffdhe6144", 00:15:41.028 "ffdhe8192" 00:15:41.028 ], 00:15:41.028 "rdma_umr_per_io": false 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "bdev_nvme_set_hotplug", 00:15:41.028 "params": { 00:15:41.028 "period_us": 100000, 00:15:41.028 "enable": false 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.028 "method": "bdev_malloc_create", 00:15:41.028 "params": { 00:15:41.028 "name": "malloc0", 00:15:41.028 "num_blocks": 8192, 00:15:41.028 "block_size": 4096, 00:15:41.028 "physical_block_size": 4096, 00:15:41.028 "uuid": "a6c986a3-ba9a-4827-b0ef-e114c725a30b", 00:15:41.028 "optimal_io_boundary": 0, 00:15:41.028 "md_size": 0, 00:15:41.028 "dif_type": 0, 00:15:41.028 "dif_is_head_of_md": false, 00:15:41.028 "dif_pi_format": 0 00:15:41.028 } 00:15:41.028 }, 00:15:41.028 { 00:15:41.029 "method": "bdev_wait_for_examine" 00:15:41.029 } 00:15:41.029 ] 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "subsystem": "nbd", 00:15:41.029 "config": [] 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "subsystem": "scheduler", 00:15:41.029 "config": [ 00:15:41.029 { 00:15:41.029 "method": "framework_set_scheduler", 00:15:41.029 "params": { 00:15:41.029 "name": "static" 00:15:41.029 } 00:15:41.029 } 00:15:41.029 ] 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "subsystem": "nvmf", 00:15:41.029 "config": [ 00:15:41.029 { 00:15:41.029 "method": "nvmf_set_config", 00:15:41.029 "params": { 00:15:41.029 "discovery_filter": "match_any", 00:15:41.029 "admin_cmd_passthru": { 00:15:41.029 "identify_ctrlr": false 00:15:41.029 }, 00:15:41.029 "dhchap_digests": [ 00:15:41.029 "sha256", 00:15:41.029 "sha384", 00:15:41.029 "sha512" 00:15:41.029 ], 00:15:41.029 "dhchap_dhgroups": [ 00:15:41.029 "null", 00:15:41.029 "ffdhe2048", 00:15:41.029 "ffdhe3072", 00:15:41.029 "ffdhe4096", 00:15:41.029 "ffdhe6144", 00:15:41.029 "ffdhe8192" 00:15:41.029 ] 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_set_max_subsystems", 00:15:41.029 "params": { 00:15:41.029 "max_subsystems": 1024 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_set_crdt", 00:15:41.029 "params": { 
00:15:41.029 "crdt1": 0, 00:15:41.029 "crdt2": 0, 00:15:41.029 "crdt3": 0 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_create_transport", 00:15:41.029 "params": { 00:15:41.029 "trtype": "TCP", 00:15:41.029 "max_queue_depth": 128, 00:15:41.029 "max_io_qpairs_per_ctrlr": 127, 00:15:41.029 "in_capsule_data_size": 4096, 00:15:41.029 "max_io_size": 131072, 00:15:41.029 "io_unit_size": 131072, 00:15:41.029 "max_aq_depth": 128, 00:15:41.029 "num_shared_buffers": 511, 00:15:41.029 "buf_cache_size": 4294967295, 00:15:41.029 "dif_insert_or_strip": false, 00:15:41.029 "zcopy": false, 00:15:41.029 "c2h_success": false, 00:15:41.029 "sock_priority": 0, 00:15:41.029 "abort_timeout_sec": 1, 00:15:41.029 "ack_timeout": 0, 00:15:41.029 "data_wr_pool_size": 0 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_create_subsystem", 00:15:41.029 "params": { 00:15:41.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.029 "allow_any_host": false, 00:15:41.029 "serial_number": "SPDK00000000000001", 00:15:41.029 "model_number": "SPDK bdev Controller", 00:15:41.029 "max_namespaces": 10, 00:15:41.029 "min_cntlid": 1, 00:15:41.029 "max_cntlid": 65519, 00:15:41.029 "ana_reporting": false 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_subsystem_add_host", 00:15:41.029 "params": { 00:15:41.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.029 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.029 "psk": "key0" 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_subsystem_add_ns", 00:15:41.029 "params": { 00:15:41.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.029 "namespace": { 00:15:41.029 "nsid": 1, 00:15:41.029 "bdev_name": "malloc0", 00:15:41.029 "nguid": "A6C986A3BA9A4827B0EFE114C725A30B", 00:15:41.029 "uuid": "a6c986a3-ba9a-4827-b0ef-e114c725a30b", 00:15:41.029 "no_auto_visible": false 00:15:41.029 } 00:15:41.029 } 00:15:41.029 }, 00:15:41.029 { 00:15:41.029 "method": "nvmf_subsystem_add_listener", 00:15:41.029 "params": { 00:15:41.029 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.029 "listen_address": { 00:15:41.029 "trtype": "TCP", 00:15:41.029 "adrfam": "IPv4", 00:15:41.029 "traddr": "10.0.0.3", 00:15:41.029 "trsvcid": "4420" 00:15:41.029 }, 00:15:41.029 "secure_channel": true 00:15:41.029 } 00:15:41.029 } 00:15:41.029 ] 00:15:41.029 } 00:15:41.029 ] 00:15:41.029 }' 00:15:41.029 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:41.289 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:41.289 "subsystems": [ 00:15:41.289 { 00:15:41.289 "subsystem": "keyring", 00:15:41.289 "config": [ 00:15:41.289 { 00:15:41.289 "method": "keyring_file_add_key", 00:15:41.289 "params": { 00:15:41.289 "name": "key0", 00:15:41.289 "path": "/tmp/tmp.2AKhlA2Ph1" 00:15:41.289 } 00:15:41.289 } 00:15:41.289 ] 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "subsystem": "iobuf", 00:15:41.289 "config": [ 00:15:41.289 { 00:15:41.289 "method": "iobuf_set_options", 00:15:41.289 "params": { 00:15:41.289 "small_pool_count": 8192, 00:15:41.289 "large_pool_count": 1024, 00:15:41.289 "small_bufsize": 8192, 00:15:41.289 "large_bufsize": 135168, 00:15:41.289 "enable_numa": false 00:15:41.289 } 00:15:41.289 } 00:15:41.289 ] 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "subsystem": "sock", 00:15:41.289 "config": [ 00:15:41.289 { 00:15:41.289 "method": "sock_set_default_impl", 00:15:41.289 "params": { 
00:15:41.289 "impl_name": "uring" 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "sock_impl_set_options", 00:15:41.289 "params": { 00:15:41.289 "impl_name": "ssl", 00:15:41.289 "recv_buf_size": 4096, 00:15:41.289 "send_buf_size": 4096, 00:15:41.289 "enable_recv_pipe": true, 00:15:41.289 "enable_quickack": false, 00:15:41.289 "enable_placement_id": 0, 00:15:41.289 "enable_zerocopy_send_server": true, 00:15:41.289 "enable_zerocopy_send_client": false, 00:15:41.289 "zerocopy_threshold": 0, 00:15:41.289 "tls_version": 0, 00:15:41.289 "enable_ktls": false 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "sock_impl_set_options", 00:15:41.289 "params": { 00:15:41.289 "impl_name": "posix", 00:15:41.289 "recv_buf_size": 2097152, 00:15:41.289 "send_buf_size": 2097152, 00:15:41.289 "enable_recv_pipe": true, 00:15:41.289 "enable_quickack": false, 00:15:41.289 "enable_placement_id": 0, 00:15:41.289 "enable_zerocopy_send_server": true, 00:15:41.289 "enable_zerocopy_send_client": false, 00:15:41.289 "zerocopy_threshold": 0, 00:15:41.289 "tls_version": 0, 00:15:41.289 "enable_ktls": false 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "sock_impl_set_options", 00:15:41.289 "params": { 00:15:41.289 "impl_name": "uring", 00:15:41.289 "recv_buf_size": 2097152, 00:15:41.289 "send_buf_size": 2097152, 00:15:41.289 "enable_recv_pipe": true, 00:15:41.289 "enable_quickack": false, 00:15:41.289 "enable_placement_id": 0, 00:15:41.289 "enable_zerocopy_send_server": false, 00:15:41.289 "enable_zerocopy_send_client": false, 00:15:41.289 "zerocopy_threshold": 0, 00:15:41.289 "tls_version": 0, 00:15:41.289 "enable_ktls": false 00:15:41.289 } 00:15:41.289 } 00:15:41.289 ] 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "subsystem": "vmd", 00:15:41.289 "config": [] 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "subsystem": "accel", 00:15:41.289 "config": [ 00:15:41.289 { 00:15:41.289 "method": "accel_set_options", 00:15:41.289 "params": { 00:15:41.289 "small_cache_size": 128, 00:15:41.289 "large_cache_size": 16, 00:15:41.289 "task_count": 2048, 00:15:41.289 "sequence_count": 2048, 00:15:41.289 "buf_count": 2048 00:15:41.289 } 00:15:41.289 } 00:15:41.289 ] 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "subsystem": "bdev", 00:15:41.289 "config": [ 00:15:41.289 { 00:15:41.289 "method": "bdev_set_options", 00:15:41.289 "params": { 00:15:41.289 "bdev_io_pool_size": 65535, 00:15:41.289 "bdev_io_cache_size": 256, 00:15:41.289 "bdev_auto_examine": true, 00:15:41.289 "iobuf_small_cache_size": 128, 00:15:41.289 "iobuf_large_cache_size": 16 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "bdev_raid_set_options", 00:15:41.289 "params": { 00:15:41.289 "process_window_size_kb": 1024, 00:15:41.289 "process_max_bandwidth_mb_sec": 0 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "bdev_iscsi_set_options", 00:15:41.289 "params": { 00:15:41.289 "timeout_sec": 30 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "bdev_nvme_set_options", 00:15:41.289 "params": { 00:15:41.289 "action_on_timeout": "none", 00:15:41.289 "timeout_us": 0, 00:15:41.289 "timeout_admin_us": 0, 00:15:41.289 "keep_alive_timeout_ms": 10000, 00:15:41.289 "arbitration_burst": 0, 00:15:41.289 "low_priority_weight": 0, 00:15:41.289 "medium_priority_weight": 0, 00:15:41.289 "high_priority_weight": 0, 00:15:41.289 "nvme_adminq_poll_period_us": 10000, 00:15:41.289 "nvme_ioq_poll_period_us": 0, 00:15:41.289 "io_queue_requests": 512, 00:15:41.289 "delay_cmd_submit": 
true, 00:15:41.289 "transport_retry_count": 4, 00:15:41.289 "bdev_retry_count": 3, 00:15:41.289 "transport_ack_timeout": 0, 00:15:41.289 "ctrlr_loss_timeout_sec": 0, 00:15:41.289 "reconnect_delay_sec": 0, 00:15:41.289 "fast_io_fail_timeout_sec": 0, 00:15:41.289 "disable_auto_failback": false, 00:15:41.289 "generate_uuids": false, 00:15:41.289 "transport_tos": 0, 00:15:41.289 "nvme_error_stat": false, 00:15:41.289 "rdma_srq_size": 0, 00:15:41.289 "io_path_stat": false, 00:15:41.289 "allow_accel_sequence": false, 00:15:41.289 "rdma_max_cq_size": 0, 00:15:41.289 "rdma_cm_event_timeout_ms": 0, 00:15:41.289 "dhchap_digests": [ 00:15:41.289 "sha256", 00:15:41.289 "sha384", 00:15:41.289 "sha512" 00:15:41.289 ], 00:15:41.289 "dhchap_dhgroups": [ 00:15:41.289 "null", 00:15:41.289 "ffdhe2048", 00:15:41.289 "ffdhe3072", 00:15:41.289 "ffdhe4096", 00:15:41.289 "ffdhe6144", 00:15:41.289 "ffdhe8192" 00:15:41.289 ], 00:15:41.289 "rdma_umr_per_io": false 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "bdev_nvme_attach_controller", 00:15:41.289 "params": { 00:15:41.289 "name": "TLSTEST", 00:15:41.289 "trtype": "TCP", 00:15:41.289 "adrfam": "IPv4", 00:15:41.289 "traddr": "10.0.0.3", 00:15:41.289 "trsvcid": "4420", 00:15:41.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.289 "prchk_reftag": false, 00:15:41.289 "prchk_guard": false, 00:15:41.289 "ctrlr_loss_timeout_sec": 0, 00:15:41.289 "reconnect_delay_sec": 0, 00:15:41.289 "fast_io_fail_timeout_sec": 0, 00:15:41.289 "psk": "key0", 00:15:41.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:41.289 "hdgst": false, 00:15:41.289 "ddgst": false, 00:15:41.289 "multipath": "multipath" 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "bdev_nvme_set_hotplug", 00:15:41.289 "params": { 00:15:41.289 "period_us": 100000, 00:15:41.289 "enable": false 00:15:41.289 } 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "method": "bdev_wait_for_examine" 00:15:41.289 } 00:15:41.289 ] 00:15:41.289 }, 00:15:41.289 { 00:15:41.289 "subsystem": "nbd", 00:15:41.289 "config": [] 00:15:41.290 } 00:15:41.290 ] 00:15:41.290 }' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 86625 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86625 ']' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86625 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86625 00:15:41.290 killing process with pid 86625 00:15:41.290 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.290 00:15:41.290 Latency(us) 00:15:41.290 [2024-12-16T01:37:11.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.290 [2024-12-16T01:37:11.948Z] =================================================================================================================== 00:15:41.290 [2024-12-16T01:37:11.948Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 
00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86625' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86625 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86625 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 86577 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86577 ']' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86577 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86577 00:15:41.290 killing process with pid 86577 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86577' 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86577 00:15:41.290 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86577 00:15:41.550 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:41.550 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:41.550 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:41.550 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.550 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:41.550 "subsystems": [ 00:15:41.550 { 00:15:41.550 "subsystem": "keyring", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "keyring_file_add_key", 00:15:41.550 "params": { 00:15:41.550 "name": "key0", 00:15:41.550 "path": "/tmp/tmp.2AKhlA2Ph1" 00:15:41.550 } 00:15:41.550 } 00:15:41.550 ] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "iobuf", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "iobuf_set_options", 00:15:41.550 "params": { 00:15:41.550 "small_pool_count": 8192, 00:15:41.550 "large_pool_count": 1024, 00:15:41.550 "small_bufsize": 8192, 00:15:41.550 "large_bufsize": 135168, 00:15:41.550 "enable_numa": false 00:15:41.550 } 00:15:41.550 } 00:15:41.550 ] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "sock", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "sock_set_default_impl", 00:15:41.550 "params": { 00:15:41.550 "impl_name": "uring" 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "sock_impl_set_options", 00:15:41.550 "params": { 00:15:41.550 "impl_name": "ssl", 00:15:41.550 "recv_buf_size": 4096, 00:15:41.550 "send_buf_size": 4096, 00:15:41.550 "enable_recv_pipe": true, 00:15:41.550 "enable_quickack": false, 00:15:41.550 "enable_placement_id": 0, 00:15:41.550 "enable_zerocopy_send_server": true, 00:15:41.550 "enable_zerocopy_send_client": 
false, 00:15:41.550 "zerocopy_threshold": 0, 00:15:41.550 "tls_version": 0, 00:15:41.550 "enable_ktls": false 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "sock_impl_set_options", 00:15:41.550 "params": { 00:15:41.550 "impl_name": "posix", 00:15:41.550 "recv_buf_size": 2097152, 00:15:41.550 "send_buf_size": 2097152, 00:15:41.550 "enable_recv_pipe": true, 00:15:41.550 "enable_quickack": false, 00:15:41.550 "enable_placement_id": 0, 00:15:41.550 "enable_zerocopy_send_server": true, 00:15:41.550 "enable_zerocopy_send_client": false, 00:15:41.550 "zerocopy_threshold": 0, 00:15:41.550 "tls_version": 0, 00:15:41.550 "enable_ktls": false 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "sock_impl_set_options", 00:15:41.550 "params": { 00:15:41.550 "impl_name": "uring", 00:15:41.550 "recv_buf_size": 2097152, 00:15:41.550 "send_buf_size": 2097152, 00:15:41.550 "enable_recv_pipe": true, 00:15:41.550 "enable_quickack": false, 00:15:41.550 "enable_placement_id": 0, 00:15:41.550 "enable_zerocopy_send_server": false, 00:15:41.550 "enable_zerocopy_send_client": false, 00:15:41.550 "zerocopy_threshold": 0, 00:15:41.550 "tls_version": 0, 00:15:41.550 "enable_ktls": false 00:15:41.550 } 00:15:41.550 } 00:15:41.550 ] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "vmd", 00:15:41.550 "config": [] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "accel", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "accel_set_options", 00:15:41.550 "params": { 00:15:41.550 "small_cache_size": 128, 00:15:41.550 "large_cache_size": 16, 00:15:41.550 "task_count": 2048, 00:15:41.550 "sequence_count": 2048, 00:15:41.550 "buf_count": 2048 00:15:41.550 } 00:15:41.550 } 00:15:41.550 ] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "bdev", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "bdev_set_options", 00:15:41.550 "params": { 00:15:41.550 "bdev_io_pool_size": 65535, 00:15:41.550 "bdev_io_cache_size": 256, 00:15:41.550 "bdev_auto_examine": true, 00:15:41.550 "iobuf_small_cache_size": 128, 00:15:41.550 "iobuf_large_cache_size": 16 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "bdev_raid_set_options", 00:15:41.550 "params": { 00:15:41.550 "process_window_size_kb": 1024, 00:15:41.550 "process_max_bandwidth_mb_sec": 0 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "bdev_iscsi_set_options", 00:15:41.550 "params": { 00:15:41.550 "timeout_sec": 30 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "bdev_nvme_set_options", 00:15:41.550 "params": { 00:15:41.550 "action_on_timeout": "none", 00:15:41.550 "timeout_us": 0, 00:15:41.550 "timeout_admin_us": 0, 00:15:41.550 "keep_alive_timeout_ms": 10000, 00:15:41.550 "arbitration_burst": 0, 00:15:41.550 "low_priority_weight": 0, 00:15:41.550 "medium_priority_weight": 0, 00:15:41.550 "high_priority_weight": 0, 00:15:41.550 "nvme_adminq_poll_period_us": 10000, 00:15:41.550 "nvme_ioq_poll_period_us": 0, 00:15:41.550 "io_queue_requests": 0, 00:15:41.550 "delay_cmd_submit": true, 00:15:41.550 "transport_retry_count": 4, 00:15:41.550 "bdev_retry_count": 3, 00:15:41.550 "transport_ack_timeout": 0, 00:15:41.550 "ctrlr_loss_timeout_sec": 0, 00:15:41.550 "reconnect_delay_sec": 0, 00:15:41.550 "fast_io_fail_timeout_sec": 0, 00:15:41.550 "disable_auto_failback": false, 00:15:41.550 "generate_uuids": false, 00:15:41.550 "transport_tos": 0, 00:15:41.550 "nvme_error_stat": false, 00:15:41.550 "rdma_srq_size": 0, 00:15:41.550 "io_path_stat": 
false, 00:15:41.550 "allow_accel_sequence": false, 00:15:41.550 "rdma_max_cq_size": 0, 00:15:41.550 "rdma_cm_event_timeout_ms": 0, 00:15:41.550 "dhchap_digests": [ 00:15:41.550 "sha256", 00:15:41.550 "sha384", 00:15:41.550 "sha512" 00:15:41.550 ], 00:15:41.550 "dhchap_dhgroups": [ 00:15:41.550 "null", 00:15:41.550 "ffdhe2048", 00:15:41.550 "ffdhe3072", 00:15:41.550 "ffdhe4096", 00:15:41.550 "ffdhe6144", 00:15:41.550 "ffdhe8192" 00:15:41.550 ], 00:15:41.550 "rdma_umr_per_io": false 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "bdev_nvme_set_hotplug", 00:15:41.550 "params": { 00:15:41.550 "period_us": 100000, 00:15:41.550 "enable": false 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "bdev_malloc_create", 00:15:41.550 "params": { 00:15:41.550 "name": "malloc0", 00:15:41.550 "num_blocks": 8192, 00:15:41.550 "block_size": 4096, 00:15:41.550 "physical_block_size": 4096, 00:15:41.550 "uuid": "a6c986a3-ba9a-4827-b0ef-e114c725a30b", 00:15:41.550 "optimal_io_boundary": 0, 00:15:41.550 "md_size": 0, 00:15:41.550 "dif_type": 0, 00:15:41.550 "dif_is_head_of_md": false, 00:15:41.550 "dif_pi_format": 0 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "bdev_wait_for_examine" 00:15:41.550 } 00:15:41.550 ] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "nbd", 00:15:41.550 "config": [] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "scheduler", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "framework_set_scheduler", 00:15:41.550 "params": { 00:15:41.550 "name": "static" 00:15:41.550 } 00:15:41.550 } 00:15:41.550 ] 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "subsystem": "nvmf", 00:15:41.550 "config": [ 00:15:41.550 { 00:15:41.550 "method": "nvmf_set_config", 00:15:41.550 "params": { 00:15:41.550 "discovery_filter": "match_any", 00:15:41.550 "admin_cmd_passthru": { 00:15:41.550 "identify_ctrlr": false 00:15:41.550 }, 00:15:41.550 "dhchap_digests": [ 00:15:41.550 "sha256", 00:15:41.550 "sha384", 00:15:41.550 "sha512" 00:15:41.550 ], 00:15:41.550 "dhchap_dhgroups": [ 00:15:41.550 "null", 00:15:41.550 "ffdhe2048", 00:15:41.550 "ffdhe3072", 00:15:41.550 "ffdhe4096", 00:15:41.550 "ffdhe6144", 00:15:41.550 "ffdhe8192" 00:15:41.550 ] 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.550 "method": "nvmf_set_max_subsystems", 00:15:41.550 "params": { 00:15:41.550 "max_subsystems": 1024 00:15:41.550 } 00:15:41.550 }, 00:15:41.550 { 00:15:41.551 "method": "nvmf_set_crdt", 00:15:41.551 "params": { 00:15:41.551 "crdt1": 0, 00:15:41.551 "crdt2": 0, 00:15:41.551 "crdt3": 0 00:15:41.551 } 00:15:41.551 }, 00:15:41.551 { 00:15:41.551 "method": "nvmf_create_transport", 00:15:41.551 "params": { 00:15:41.551 "trtype": "TCP", 00:15:41.551 "max_queue_depth": 128, 00:15:41.551 "max_io_qpairs_per_ctrlr": 127, 00:15:41.551 "in_capsule_data_size": 4096, 00:15:41.551 "max_io_size": 131072, 00:15:41.551 "io_unit_size": 131072, 00:15:41.551 "max_aq_depth": 128, 00:15:41.551 "num_shared_buffers": 511, 00:15:41.551 "buf_cache_size": 4294967295, 00:15:41.551 "dif_insert_or_strip": false, 00:15:41.551 "zcopy": false, 00:15:41.551 "c2h_success": false, 00:15:41.551 "sock_priority": 0, 00:15:41.551 "abort_timeout_sec": 1, 00:15:41.551 "ack_timeout": 0, 00:15:41.551 "data_wr_pool_size": 0 00:15:41.551 } 00:15:41.551 }, 00:15:41.551 { 00:15:41.551 "method": "nvmf_create_subsystem", 00:15:41.551 "params": { 00:15:41.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.551 "allow_any_host": false, 00:15:41.551 "serial_number": 
"SPDK00000000000001", 00:15:41.551 "model_number": "SPDK bdev Controller", 00:15:41.551 "max_namespaces": 10, 00:15:41.551 "min_cntlid": 1, 00:15:41.551 "max_cntlid": 65519, 00:15:41.551 "ana_reporting": false 00:15:41.551 } 00:15:41.551 }, 00:15:41.551 { 00:15:41.551 "method": "nvmf_subsystem_add_host", 00:15:41.551 "params": { 00:15:41.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.551 "host": "nqn.2016-06.io.spdk:host1", 00:15:41.551 "psk": "key0" 00:15:41.551 } 00:15:41.551 }, 00:15:41.551 { 00:15:41.551 "method": "nvmf_subsystem_add_ns", 00:15:41.551 "params": { 00:15:41.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.551 "namespace": { 00:15:41.551 "nsid": 1, 00:15:41.551 "bdev_name": "malloc0", 00:15:41.551 "nguid": "A6C986A3BA9A4827B0EFE114C725A30B", 00:15:41.551 "uuid": "a6c986a3-ba9a-4827-b0ef-e114c725a30b", 00:15:41.551 "no_auto_visible": false 00:15:41.551 } 00:15:41.551 } 00:15:41.551 }, 00:15:41.551 { 00:15:41.551 "method": "nvmf_subsystem_add_listener", 00:15:41.551 "params": { 00:15:41.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:41.551 "listen_address": { 00:15:41.551 "trtype": "TCP", 00:15:41.551 "adrfam": "IPv4", 00:15:41.551 "traddr": "10.0.0.3", 00:15:41.551 "trsvcid": "4420" 00:15:41.551 }, 00:15:41.551 "secure_channel": true 00:15:41.551 } 00:15:41.551 } 00:15:41.551 ] 00:15:41.551 } 00:15:41.551 ] 00:15:41.551 }' 00:15:41.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86669 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86669 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86669 ']' 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.551 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.551 [2024-12-16 01:37:12.128541] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:41.551 [2024-12-16 01:37:12.129143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.810 [2024-12-16 01:37:12.274369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.810 [2024-12-16 01:37:12.292659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.810 [2024-12-16 01:37:12.292933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:41.810 [2024-12-16 01:37:12.293085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.810 [2024-12-16 01:37:12.293139] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.810 [2024-12-16 01:37:12.293242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.810 [2024-12-16 01:37:12.293599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.810 [2024-12-16 01:37:12.435740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.070 [2024-12-16 01:37:12.489446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.070 [2024-12-16 01:37:12.521400] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:42.070 [2024-12-16 01:37:12.521644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=86701 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 86701 /var/tmp/bdevperf.sock 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86701 ']' 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.639 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:42.639 "subsystems": [ 00:15:42.639 { 00:15:42.639 "subsystem": "keyring", 00:15:42.639 "config": [ 00:15:42.639 { 00:15:42.639 "method": "keyring_file_add_key", 00:15:42.639 "params": { 00:15:42.639 "name": "key0", 00:15:42.639 "path": "/tmp/tmp.2AKhlA2Ph1" 00:15:42.639 } 00:15:42.639 } 00:15:42.639 ] 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "subsystem": "iobuf", 00:15:42.639 "config": [ 00:15:42.639 { 00:15:42.639 "method": "iobuf_set_options", 00:15:42.639 "params": { 00:15:42.639 "small_pool_count": 8192, 00:15:42.639 "large_pool_count": 1024, 00:15:42.639 "small_bufsize": 8192, 00:15:42.639 "large_bufsize": 135168, 00:15:42.639 "enable_numa": false 00:15:42.639 } 00:15:42.639 } 00:15:42.639 ] 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "subsystem": "sock", 00:15:42.639 "config": [ 00:15:42.639 { 00:15:42.639 "method": "sock_set_default_impl", 00:15:42.639 "params": { 00:15:42.639 "impl_name": "uring" 00:15:42.639 } 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "method": "sock_impl_set_options", 00:15:42.639 "params": { 00:15:42.639 "impl_name": "ssl", 00:15:42.639 "recv_buf_size": 4096, 00:15:42.639 "send_buf_size": 4096, 00:15:42.639 "enable_recv_pipe": true, 00:15:42.639 "enable_quickack": false, 00:15:42.639 "enable_placement_id": 0, 00:15:42.639 "enable_zerocopy_send_server": true, 00:15:42.639 "enable_zerocopy_send_client": false, 00:15:42.639 "zerocopy_threshold": 0, 00:15:42.639 "tls_version": 0, 00:15:42.639 "enable_ktls": false 00:15:42.639 } 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "method": "sock_impl_set_options", 00:15:42.639 "params": { 00:15:42.639 "impl_name": "posix", 00:15:42.639 "recv_buf_size": 2097152, 00:15:42.639 "send_buf_size": 2097152, 00:15:42.639 "enable_recv_pipe": true, 00:15:42.639 "enable_quickack": false, 00:15:42.639 "enable_placement_id": 0, 00:15:42.639 "enable_zerocopy_send_server": true, 00:15:42.639 "enable_zerocopy_send_client": false, 00:15:42.639 "zerocopy_threshold": 0, 00:15:42.639 "tls_version": 0, 00:15:42.639 "enable_ktls": false 00:15:42.639 } 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "method": "sock_impl_set_options", 00:15:42.639 "params": { 00:15:42.639 "impl_name": "uring", 00:15:42.639 "recv_buf_size": 2097152, 00:15:42.639 "send_buf_size": 2097152, 00:15:42.639 "enable_recv_pipe": true, 00:15:42.639 "enable_quickack": false, 00:15:42.639 "enable_placement_id": 0, 00:15:42.639 "enable_zerocopy_send_server": false, 00:15:42.639 "enable_zerocopy_send_client": false, 00:15:42.639 "zerocopy_threshold": 0, 00:15:42.639 "tls_version": 0, 00:15:42.639 "enable_ktls": false 00:15:42.639 } 00:15:42.639 } 00:15:42.639 ] 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "subsystem": "vmd", 00:15:42.639 "config": [] 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "subsystem": "accel", 00:15:42.639 "config": [ 00:15:42.639 { 00:15:42.639 "method": "accel_set_options", 00:15:42.639 "params": { 00:15:42.639 "small_cache_size": 128, 00:15:42.639 "large_cache_size": 16, 00:15:42.639 "task_count": 2048, 00:15:42.639 "sequence_count": 2048, 00:15:42.639 "buf_count": 2048 00:15:42.639 } 00:15:42.639 } 00:15:42.639 ] 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "subsystem": "bdev", 00:15:42.639 "config": [ 00:15:42.639 { 00:15:42.639 "method": "bdev_set_options", 00:15:42.639 "params": { 00:15:42.639 "bdev_io_pool_size": 65535, 00:15:42.639 
"bdev_io_cache_size": 256, 00:15:42.639 "bdev_auto_examine": true, 00:15:42.639 "iobuf_small_cache_size": 128, 00:15:42.639 "iobuf_large_cache_size": 16 00:15:42.639 } 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "method": "bdev_raid_set_options", 00:15:42.639 "params": { 00:15:42.639 "process_window_size_kb": 1024, 00:15:42.639 "process_max_bandwidth_mb_sec": 0 00:15:42.639 } 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "method": "bdev_iscsi_set_options", 00:15:42.639 "params": { 00:15:42.639 "timeout_sec": 30 00:15:42.639 } 00:15:42.639 }, 00:15:42.639 { 00:15:42.639 "method": "bdev_nvme_set_options", 00:15:42.639 "params": { 00:15:42.639 "action_on_timeout": "none", 00:15:42.639 "timeout_us": 0, 00:15:42.639 "timeout_admin_us": 0, 00:15:42.639 "keep_alive_timeout_ms": 10000, 00:15:42.639 "arbitration_burst": 0, 00:15:42.640 "low_priority_weight": 0, 00:15:42.640 "medium_priority_weight": 0, 00:15:42.640 "high_priority_weight": 0, 00:15:42.640 "nvme_adminq_poll_period_us": 10000, 00:15:42.640 "nvme_ioq_poll_period_us": 0, 00:15:42.640 "io_queue_requests": 512, 00:15:42.640 "delay_cmd_submit": true, 00:15:42.640 "transport_retry_count": 4, 00:15:42.640 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:42.640 "bdev_retry_count": 3, 00:15:42.640 "transport_ack_timeout": 0, 00:15:42.640 "ctrlr_loss_timeout_sec": 0, 00:15:42.640 "reconnect_delay_sec": 0, 00:15:42.640 "fast_io_fail_timeout_sec": 0, 00:15:42.640 "disable_auto_failback": false, 00:15:42.640 "generate_uuids": false, 00:15:42.640 "transport_tos": 0, 00:15:42.640 "nvme_error_stat": false, 00:15:42.640 "rdma_srq_size": 0, 00:15:42.640 "io_path_stat": false, 00:15:42.640 "allow_accel_sequence": false, 00:15:42.640 "rdma_max_cq_size": 0, 00:15:42.640 "rdma_cm_event_timeout_ms": 0, 00:15:42.640 "dhchap_digests": [ 00:15:42.640 "sha256", 00:15:42.640 "sha384", 00:15:42.640 "sha512" 00:15:42.640 ], 00:15:42.640 "dhchap_dhgroups": [ 00:15:42.640 "null", 00:15:42.640 "ffdhe2048", 00:15:42.640 "ffdhe3072", 00:15:42.640 "ffdhe4096", 00:15:42.640 "ffdhe6144", 00:15:42.640 "ffdhe8192" 00:15:42.640 ], 00:15:42.640 "rdma_umr_per_io": false 00:15:42.640 } 00:15:42.640 }, 00:15:42.640 { 00:15:42.640 "method": "bdev_nvme_attach_controller", 00:15:42.640 "params": { 00:15:42.640 "name": "TLSTEST", 00:15:42.640 "trtype": "TCP", 00:15:42.640 "adrfam": "IPv4", 00:15:42.640 "traddr": "10.0.0.3", 00:15:42.640 "trsvcid": "4420", 00:15:42.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:42.640 "prchk_reftag": false, 00:15:42.640 "prchk_guard": false, 00:15:42.640 "ctrlr_loss_timeout_sec": 0, 00:15:42.640 "reconnect_delay_sec": 0, 00:15:42.640 "fast_io_fail_timeout_sec": 0, 00:15:42.640 "psk": "key0", 00:15:42.640 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:42.640 "hdgst": false, 00:15:42.640 "ddgst": false, 00:15:42.640 "multipath": "multipath" 00:15:42.640 } 00:15:42.640 }, 00:15:42.640 { 00:15:42.640 "method": "bdev_nvme_set_hotplug", 00:15:42.640 "params": { 00:15:42.640 "period_us": 100000, 00:15:42.640 "enable": false 00:15:42.640 } 00:15:42.640 }, 00:15:42.640 { 00:15:42.640 "method": "bdev_wait_for_examine" 00:15:42.640 } 00:15:42.640 ] 00:15:42.640 }, 00:15:42.640 { 00:15:42.640 "subsystem": "nbd", 00:15:42.640 "config": [] 00:15:42.640 } 00:15:42.640 ] 00:15:42.640 }' 00:15:42.640 [2024-12-16 01:37:13.235742] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:15:42.640 [2024-12-16 01:37:13.236033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86701 ] 00:15:42.899 [2024-12-16 01:37:13.386603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.899 [2024-12-16 01:37:13.411358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.899 [2024-12-16 01:37:13.525777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:43.158 [2024-12-16 01:37:13.558223] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:43.726 01:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.726 01:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:43.726 01:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:43.726 Running I/O for 10 seconds... 00:15:46.038 4361.00 IOPS, 17.04 MiB/s [2024-12-16T01:37:17.633Z] 4316.50 IOPS, 16.86 MiB/s [2024-12-16T01:37:18.572Z] 4370.00 IOPS, 17.07 MiB/s [2024-12-16T01:37:19.509Z] 4347.75 IOPS, 16.98 MiB/s [2024-12-16T01:37:20.472Z] 4314.60 IOPS, 16.85 MiB/s [2024-12-16T01:37:21.408Z] 4281.50 IOPS, 16.72 MiB/s [2024-12-16T01:37:22.786Z] 4260.00 IOPS, 16.64 MiB/s [2024-12-16T01:37:23.724Z] 4242.75 IOPS, 16.57 MiB/s [2024-12-16T01:37:24.663Z] 4230.00 IOPS, 16.52 MiB/s [2024-12-16T01:37:24.663Z] 4219.60 IOPS, 16.48 MiB/s 00:15:54.005 Latency(us) 00:15:54.005 [2024-12-16T01:37:24.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.005 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:54.005 Verification LBA range: start 0x0 length 0x2000 00:15:54.005 TLSTESTn1 : 10.01 4225.30 16.51 0.00 0.00 30239.82 4647.10 27048.49 00:15:54.005 [2024-12-16T01:37:24.663Z] =================================================================================================================== 00:15:54.005 [2024-12-16T01:37:24.663Z] Total : 4225.30 16.51 0.00 0.00 30239.82 4647.10 27048.49 00:15:54.005 { 00:15:54.005 "results": [ 00:15:54.005 { 00:15:54.005 "job": "TLSTESTn1", 00:15:54.005 "core_mask": "0x4", 00:15:54.005 "workload": "verify", 00:15:54.005 "status": "finished", 00:15:54.005 "verify_range": { 00:15:54.005 "start": 0, 00:15:54.005 "length": 8192 00:15:54.005 }, 00:15:54.005 "queue_depth": 128, 00:15:54.005 "io_size": 4096, 00:15:54.005 "runtime": 10.014919, 00:15:54.005 "iops": 4225.296280479153, 00:15:54.005 "mibps": 16.505063595621692, 00:15:54.005 "io_failed": 0, 00:15:54.005 "io_timeout": 0, 00:15:54.005 "avg_latency_us": 30239.81880535194, 00:15:54.005 "min_latency_us": 4647.098181818182, 00:15:54.005 "max_latency_us": 27048.494545454545 00:15:54.005 } 00:15:54.005 ], 00:15:54.005 "core_count": 1 00:15:54.005 } 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 86701 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86701 ']' 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 86701 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86701 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:54.005 killing process with pid 86701 00:15:54.005 Received shutdown signal, test time was about 10.000000 seconds 00:15:54.005 00:15:54.005 Latency(us) 00:15:54.005 [2024-12-16T01:37:24.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.005 [2024-12-16T01:37:24.663Z] =================================================================================================================== 00:15:54.005 [2024-12-16T01:37:24.663Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86701' 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86701 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86701 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 86669 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86669 ']' 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86669 00:15:54.005 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86669 00:15:54.006 killing process with pid 86669 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86669' 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86669 00:15:54.006 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86669 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
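The initiator side of the TLSTESTn1 run above follows the same pattern, driven over the bdevperf RPC socket rather than the default /var/tmp/spdk.sock. A rough sketch using the same parameters as the log, with bdevperf started idle via -z and configured afterwards:

  # start bdevperf in wait-for-RPC mode on its own socket
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

  # register the same PSK on the initiator and attach the controller with TLS
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # kick off the verify workload (the run above sustained roughly 4.2k IOPS over 10 seconds)
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests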
00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86840 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86840 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86840 ']' 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.266 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.266 [2024-12-16 01:37:24.826752] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:54.266 [2024-12-16 01:37:24.827044] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.525 [2024-12-16 01:37:24.983459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.525 [2024-12-16 01:37:25.007994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.525 [2024-12-16 01:37:25.008295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:54.525 [2024-12-16 01:37:25.008467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.525 [2024-12-16 01:37:25.008754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.525 [2024-12-16 01:37:25.008946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
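nvmfappstart launches nvmf_tgt inside the nvmf_tgt_ns_spdk network namespace and then blocks in waitforlisten until the RPC socket /var/tmp/spdk.sock answers. The real helper lives in autotest_common.sh; a minimal stand-in, assuming rpc_get_methods as the readiness probe (that probe and the poll interval are assumptions, not taken from this log), would be:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
      local max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" || return 1                                   # target died during startup
          # assumed probe: any cheap RPC succeeds once the socket is served
          if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
              return 0
          fi
          sleep 0.5
      done
      return 1                                                         # gave up after max_retries
  }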
00:15:54.525 [2024-12-16 01:37:25.009345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.525 [2024-12-16 01:37:25.044969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.2AKhlA2Ph1 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.2AKhlA2Ph1 00:15:54.525 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:54.784 [2024-12-16 01:37:25.421834] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.043 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:55.043 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:55.612 [2024-12-16 01:37:25.978059] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:55.612 [2024-12-16 01:37:25.978560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.612 01:37:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:55.612 malloc0 00:15:55.612 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:55.871 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:56.130 01:37:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:56.389 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=86888 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 86888 /var/tmp/bdevperf.sock 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86888 ']' 00:15:56.390 
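Condensed from the target/tls.sh@52–@59 trace above, the entire TLS-enabled target setup is seven RPCs against the default /var/tmp/spdk.sock (commands and the temporary key file name copied from the log; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the PSK file was generated earlier in the test):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0    # 32 MiB malloc bdev, 4 KiB blocks
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

After the last call, only a host presenting the pre-shared key registered as key0 can connect to cnode1 over the TLS listener, which is exactly what the bdevperf initiator below does.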
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.390 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.649 [2024-12-16 01:37:27.102005] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:56.649 [2024-12-16 01:37:27.102325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86888 ] 00:15:56.649 [2024-12-16 01:37:27.255719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.649 [2024-12-16 01:37:27.281349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.908 [2024-12-16 01:37:27.317189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:56.908 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.908 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:56.908 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:15:57.167 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:57.426 [2024-12-16 01:37:27.869487] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:57.426 nvme0n1 00:15:57.426 01:37:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:57.426 Running I/O for 1 seconds... 
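While that 1-second verify pass runs, it is worth collecting the initiator-side sequence the trace just showed into one place. Everything goes to the bdevperf RPC socket rather than the target's (commands copied verbatim from target/tls.sh@229, @230 and @234 above):

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach produces the nvme0n1 bdev, which is the device the result table below reports on.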
00:15:58.805 3840.00 IOPS, 15.00 MiB/s 00:15:58.805 Latency(us) 00:15:58.805 [2024-12-16T01:37:29.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.805 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:58.805 Verification LBA range: start 0x0 length 0x2000 00:15:58.805 nvme0n1 : 1.02 3885.15 15.18 0.00 0.00 32603.41 7060.01 20614.05 00:15:58.805 [2024-12-16T01:37:29.463Z] =================================================================================================================== 00:15:58.805 [2024-12-16T01:37:29.463Z] Total : 3885.15 15.18 0.00 0.00 32603.41 7060.01 20614.05 00:15:58.805 { 00:15:58.805 "results": [ 00:15:58.805 { 00:15:58.805 "job": "nvme0n1", 00:15:58.805 "core_mask": "0x2", 00:15:58.805 "workload": "verify", 00:15:58.805 "status": "finished", 00:15:58.805 "verify_range": { 00:15:58.805 "start": 0, 00:15:58.805 "length": 8192 00:15:58.805 }, 00:15:58.805 "queue_depth": 128, 00:15:58.805 "io_size": 4096, 00:15:58.805 "runtime": 1.021324, 00:15:58.805 "iops": 3885.1529974817004, 00:15:58.805 "mibps": 15.176378896412892, 00:15:58.805 "io_failed": 0, 00:15:58.805 "io_timeout": 0, 00:15:58.805 "avg_latency_us": 32603.409266862174, 00:15:58.805 "min_latency_us": 7060.014545454545, 00:15:58.805 "max_latency_us": 20614.05090909091 00:15:58.805 } 00:15:58.805 ], 00:15:58.805 "core_count": 1 00:15:58.805 } 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 86888 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86888 ']' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86888 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86888 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:58.805 killing process with pid 86888 00:15:58.805 Received shutdown signal, test time was about 1.000000 seconds 00:15:58.805 00:15:58.805 Latency(us) 00:15:58.805 [2024-12-16T01:37:29.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.805 [2024-12-16T01:37:29.463Z] =================================================================================================================== 00:15:58.805 [2024-12-16T01:37:29.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86888' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86888 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86888 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 86840 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86840 ']' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86840 00:15:58.805 01:37:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86840 00:15:58.805 killing process with pid 86840 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86840' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86840 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86840 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=86926 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 86926 00:15:58.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86926 ']' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.805 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.065 [2024-12-16 01:37:29.525132] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:59.065 [2024-12-16 01:37:29.525695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.065 [2024-12-16 01:37:29.676843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.065 [2024-12-16 01:37:29.697088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.065 [2024-12-16 01:37:29.697434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:59.065 [2024-12-16 01:37:29.697603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.065 [2024-12-16 01:37:29.697770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.065 [2024-12-16 01:37:29.697809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.065 [2024-12-16 01:37:29.698231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.324 [2024-12-16 01:37:29.730600] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.324 [2024-12-16 01:37:29.829859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.324 malloc0 00:15:59.324 [2024-12-16 01:37:29.856678] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:59.324 [2024-12-16 01:37:29.857063] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=86955 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 86955 /var/tmp/bdevperf.sock 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 86955 ']' 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
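The bdevperf launch at target/tls.sh@254 above has the same shape as the earlier ones; the flags matter more than the binary path. A commented copy follows, with the flag meanings summarized from standard bdevperf usage rather than spelled out in this log:

  # -m 2: core mask 0x2, i.e. the reactor runs on core 1 (matching the notice below)
  # -z:   start idle and wait for RPC configuration before running any job
  # -r:   RPC listen socket, targeted by rpc.py and bdevperf.py
  # -q 128 / -o 4k / -w verify / -t 1: queue depth, I/O size, workload type, run time in seconds
  build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &

Because of -z, nothing happens until the key is registered, the controller is attached, and bdevperf.py issues perform_tests, as in the previous iteration.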
00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.324 01:37:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.324 [2024-12-16 01:37:29.949292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:15:59.324 [2024-12-16 01:37:29.949393] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86955 ] 00:15:59.584 [2024-12-16 01:37:30.101159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.584 [2024-12-16 01:37:30.123469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.584 [2024-12-16 01:37:30.155835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.584 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.584 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:59.584 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.2AKhlA2Ph1 00:16:00.152 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:00.152 [2024-12-16 01:37:30.766555] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:00.423 nvme0n1 00:16:00.423 01:37:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:00.423 Running I/O for 1 seconds... 
00:16:01.402 3840.00 IOPS, 15.00 MiB/s 00:16:01.402 Latency(us) 00:16:01.402 [2024-12-16T01:37:32.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:01.402 Verification LBA range: start 0x0 length 0x2000 00:16:01.402 nvme0n1 : 1.03 3862.35 15.09 0.00 0.00 32768.68 7447.27 20614.05 00:16:01.402 [2024-12-16T01:37:32.060Z] =================================================================================================================== 00:16:01.402 [2024-12-16T01:37:32.060Z] Total : 3862.35 15.09 0.00 0.00 32768.68 7447.27 20614.05 00:16:01.402 { 00:16:01.402 "results": [ 00:16:01.402 { 00:16:01.402 "job": "nvme0n1", 00:16:01.402 "core_mask": "0x2", 00:16:01.402 "workload": "verify", 00:16:01.402 "status": "finished", 00:16:01.402 "verify_range": { 00:16:01.402 "start": 0, 00:16:01.402 "length": 8192 00:16:01.402 }, 00:16:01.402 "queue_depth": 128, 00:16:01.402 "io_size": 4096, 00:16:01.402 "runtime": 1.027353, 00:16:01.402 "iops": 3862.3530568363553, 00:16:01.402 "mibps": 15.087316628267013, 00:16:01.402 "io_failed": 0, 00:16:01.402 "io_timeout": 0, 00:16:01.402 "avg_latency_us": 32768.683167155425, 00:16:01.402 "min_latency_us": 7447.272727272727, 00:16:01.402 "max_latency_us": 20614.05090909091 00:16:01.402 } 00:16:01.402 ], 00:16:01.402 "core_count": 1 00:16:01.402 } 00:16:01.402 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:01.402 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.402 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:01.662 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.662 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:01.662 "subsystems": [ 00:16:01.662 { 00:16:01.662 "subsystem": "keyring", 00:16:01.662 "config": [ 00:16:01.662 { 00:16:01.662 "method": "keyring_file_add_key", 00:16:01.662 "params": { 00:16:01.662 "name": "key0", 00:16:01.662 "path": "/tmp/tmp.2AKhlA2Ph1" 00:16:01.662 } 00:16:01.662 } 00:16:01.662 ] 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "subsystem": "iobuf", 00:16:01.662 "config": [ 00:16:01.662 { 00:16:01.662 "method": "iobuf_set_options", 00:16:01.662 "params": { 00:16:01.662 "small_pool_count": 8192, 00:16:01.662 "large_pool_count": 1024, 00:16:01.662 "small_bufsize": 8192, 00:16:01.662 "large_bufsize": 135168, 00:16:01.662 "enable_numa": false 00:16:01.662 } 00:16:01.662 } 00:16:01.662 ] 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "subsystem": "sock", 00:16:01.662 "config": [ 00:16:01.662 { 00:16:01.662 "method": "sock_set_default_impl", 00:16:01.662 "params": { 00:16:01.662 "impl_name": "uring" 00:16:01.662 } 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "method": "sock_impl_set_options", 00:16:01.662 "params": { 00:16:01.662 "impl_name": "ssl", 00:16:01.662 "recv_buf_size": 4096, 00:16:01.662 "send_buf_size": 4096, 00:16:01.662 "enable_recv_pipe": true, 00:16:01.662 "enable_quickack": false, 00:16:01.662 "enable_placement_id": 0, 00:16:01.662 "enable_zerocopy_send_server": true, 00:16:01.662 "enable_zerocopy_send_client": false, 00:16:01.662 "zerocopy_threshold": 0, 00:16:01.662 "tls_version": 0, 00:16:01.662 "enable_ktls": false 00:16:01.662 } 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "method": "sock_impl_set_options", 00:16:01.662 "params": { 00:16:01.662 "impl_name": 
"posix", 00:16:01.662 "recv_buf_size": 2097152, 00:16:01.662 "send_buf_size": 2097152, 00:16:01.662 "enable_recv_pipe": true, 00:16:01.662 "enable_quickack": false, 00:16:01.662 "enable_placement_id": 0, 00:16:01.662 "enable_zerocopy_send_server": true, 00:16:01.662 "enable_zerocopy_send_client": false, 00:16:01.662 "zerocopy_threshold": 0, 00:16:01.662 "tls_version": 0, 00:16:01.662 "enable_ktls": false 00:16:01.662 } 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "method": "sock_impl_set_options", 00:16:01.662 "params": { 00:16:01.662 "impl_name": "uring", 00:16:01.662 "recv_buf_size": 2097152, 00:16:01.662 "send_buf_size": 2097152, 00:16:01.662 "enable_recv_pipe": true, 00:16:01.662 "enable_quickack": false, 00:16:01.662 "enable_placement_id": 0, 00:16:01.662 "enable_zerocopy_send_server": false, 00:16:01.662 "enable_zerocopy_send_client": false, 00:16:01.662 "zerocopy_threshold": 0, 00:16:01.662 "tls_version": 0, 00:16:01.662 "enable_ktls": false 00:16:01.662 } 00:16:01.662 } 00:16:01.662 ] 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "subsystem": "vmd", 00:16:01.662 "config": [] 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "subsystem": "accel", 00:16:01.662 "config": [ 00:16:01.662 { 00:16:01.662 "method": "accel_set_options", 00:16:01.662 "params": { 00:16:01.662 "small_cache_size": 128, 00:16:01.662 "large_cache_size": 16, 00:16:01.662 "task_count": 2048, 00:16:01.662 "sequence_count": 2048, 00:16:01.662 "buf_count": 2048 00:16:01.662 } 00:16:01.662 } 00:16:01.662 ] 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "subsystem": "bdev", 00:16:01.662 "config": [ 00:16:01.662 { 00:16:01.662 "method": "bdev_set_options", 00:16:01.662 "params": { 00:16:01.662 "bdev_io_pool_size": 65535, 00:16:01.662 "bdev_io_cache_size": 256, 00:16:01.662 "bdev_auto_examine": true, 00:16:01.662 "iobuf_small_cache_size": 128, 00:16:01.662 "iobuf_large_cache_size": 16 00:16:01.662 } 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "method": "bdev_raid_set_options", 00:16:01.662 "params": { 00:16:01.662 "process_window_size_kb": 1024, 00:16:01.662 "process_max_bandwidth_mb_sec": 0 00:16:01.662 } 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "method": "bdev_iscsi_set_options", 00:16:01.662 "params": { 00:16:01.662 "timeout_sec": 30 00:16:01.662 } 00:16:01.662 }, 00:16:01.662 { 00:16:01.662 "method": "bdev_nvme_set_options", 00:16:01.662 "params": { 00:16:01.662 "action_on_timeout": "none", 00:16:01.662 "timeout_us": 0, 00:16:01.662 "timeout_admin_us": 0, 00:16:01.662 "keep_alive_timeout_ms": 10000, 00:16:01.662 "arbitration_burst": 0, 00:16:01.663 "low_priority_weight": 0, 00:16:01.663 "medium_priority_weight": 0, 00:16:01.663 "high_priority_weight": 0, 00:16:01.663 "nvme_adminq_poll_period_us": 10000, 00:16:01.663 "nvme_ioq_poll_period_us": 0, 00:16:01.663 "io_queue_requests": 0, 00:16:01.663 "delay_cmd_submit": true, 00:16:01.663 "transport_retry_count": 4, 00:16:01.663 "bdev_retry_count": 3, 00:16:01.663 "transport_ack_timeout": 0, 00:16:01.663 "ctrlr_loss_timeout_sec": 0, 00:16:01.663 "reconnect_delay_sec": 0, 00:16:01.663 "fast_io_fail_timeout_sec": 0, 00:16:01.663 "disable_auto_failback": false, 00:16:01.663 "generate_uuids": false, 00:16:01.663 "transport_tos": 0, 00:16:01.663 "nvme_error_stat": false, 00:16:01.663 "rdma_srq_size": 0, 00:16:01.663 "io_path_stat": false, 00:16:01.663 "allow_accel_sequence": false, 00:16:01.663 "rdma_max_cq_size": 0, 00:16:01.663 "rdma_cm_event_timeout_ms": 0, 00:16:01.663 "dhchap_digests": [ 00:16:01.663 "sha256", 00:16:01.663 "sha384", 00:16:01.663 "sha512" 00:16:01.663 ], 00:16:01.663 
"dhchap_dhgroups": [ 00:16:01.663 "null", 00:16:01.663 "ffdhe2048", 00:16:01.663 "ffdhe3072", 00:16:01.663 "ffdhe4096", 00:16:01.663 "ffdhe6144", 00:16:01.663 "ffdhe8192" 00:16:01.663 ], 00:16:01.663 "rdma_umr_per_io": false 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "bdev_nvme_set_hotplug", 00:16:01.663 "params": { 00:16:01.663 "period_us": 100000, 00:16:01.663 "enable": false 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "bdev_malloc_create", 00:16:01.663 "params": { 00:16:01.663 "name": "malloc0", 00:16:01.663 "num_blocks": 8192, 00:16:01.663 "block_size": 4096, 00:16:01.663 "physical_block_size": 4096, 00:16:01.663 "uuid": "4d83c612-eda1-4c59-9c4e-3d187259a018", 00:16:01.663 "optimal_io_boundary": 0, 00:16:01.663 "md_size": 0, 00:16:01.663 "dif_type": 0, 00:16:01.663 "dif_is_head_of_md": false, 00:16:01.663 "dif_pi_format": 0 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "bdev_wait_for_examine" 00:16:01.663 } 00:16:01.663 ] 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "subsystem": "nbd", 00:16:01.663 "config": [] 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "subsystem": "scheduler", 00:16:01.663 "config": [ 00:16:01.663 { 00:16:01.663 "method": "framework_set_scheduler", 00:16:01.663 "params": { 00:16:01.663 "name": "static" 00:16:01.663 } 00:16:01.663 } 00:16:01.663 ] 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "subsystem": "nvmf", 00:16:01.663 "config": [ 00:16:01.663 { 00:16:01.663 "method": "nvmf_set_config", 00:16:01.663 "params": { 00:16:01.663 "discovery_filter": "match_any", 00:16:01.663 "admin_cmd_passthru": { 00:16:01.663 "identify_ctrlr": false 00:16:01.663 }, 00:16:01.663 "dhchap_digests": [ 00:16:01.663 "sha256", 00:16:01.663 "sha384", 00:16:01.663 "sha512" 00:16:01.663 ], 00:16:01.663 "dhchap_dhgroups": [ 00:16:01.663 "null", 00:16:01.663 "ffdhe2048", 00:16:01.663 "ffdhe3072", 00:16:01.663 "ffdhe4096", 00:16:01.663 "ffdhe6144", 00:16:01.663 "ffdhe8192" 00:16:01.663 ] 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "nvmf_set_max_subsystems", 00:16:01.663 "params": { 00:16:01.663 "max_subsystems": 1024 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "nvmf_set_crdt", 00:16:01.663 "params": { 00:16:01.663 "crdt1": 0, 00:16:01.663 "crdt2": 0, 00:16:01.663 "crdt3": 0 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "nvmf_create_transport", 00:16:01.663 "params": { 00:16:01.663 "trtype": "TCP", 00:16:01.663 "max_queue_depth": 128, 00:16:01.663 "max_io_qpairs_per_ctrlr": 127, 00:16:01.663 "in_capsule_data_size": 4096, 00:16:01.663 "max_io_size": 131072, 00:16:01.663 "io_unit_size": 131072, 00:16:01.663 "max_aq_depth": 128, 00:16:01.663 "num_shared_buffers": 511, 00:16:01.663 "buf_cache_size": 4294967295, 00:16:01.663 "dif_insert_or_strip": false, 00:16:01.663 "zcopy": false, 00:16:01.663 "c2h_success": false, 00:16:01.663 "sock_priority": 0, 00:16:01.663 "abort_timeout_sec": 1, 00:16:01.663 "ack_timeout": 0, 00:16:01.663 "data_wr_pool_size": 0 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "nvmf_create_subsystem", 00:16:01.663 "params": { 00:16:01.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.663 "allow_any_host": false, 00:16:01.663 "serial_number": "00000000000000000000", 00:16:01.663 "model_number": "SPDK bdev Controller", 00:16:01.663 "max_namespaces": 32, 00:16:01.663 "min_cntlid": 1, 00:16:01.663 "max_cntlid": 65519, 00:16:01.663 "ana_reporting": false 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 
"method": "nvmf_subsystem_add_host", 00:16:01.663 "params": { 00:16:01.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.663 "host": "nqn.2016-06.io.spdk:host1", 00:16:01.663 "psk": "key0" 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "nvmf_subsystem_add_ns", 00:16:01.663 "params": { 00:16:01.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.663 "namespace": { 00:16:01.663 "nsid": 1, 00:16:01.663 "bdev_name": "malloc0", 00:16:01.663 "nguid": "4D83C612EDA14C599C4E3D187259A018", 00:16:01.663 "uuid": "4d83c612-eda1-4c59-9c4e-3d187259a018", 00:16:01.663 "no_auto_visible": false 00:16:01.663 } 00:16:01.663 } 00:16:01.663 }, 00:16:01.663 { 00:16:01.663 "method": "nvmf_subsystem_add_listener", 00:16:01.663 "params": { 00:16:01.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:01.663 "listen_address": { 00:16:01.663 "trtype": "TCP", 00:16:01.663 "adrfam": "IPv4", 00:16:01.663 "traddr": "10.0.0.3", 00:16:01.663 "trsvcid": "4420" 00:16:01.663 }, 00:16:01.663 "secure_channel": false, 00:16:01.663 "sock_impl": "ssl" 00:16:01.663 } 00:16:01.663 } 00:16:01.663 ] 00:16:01.663 } 00:16:01.663 ] 00:16:01.663 }' 00:16:01.663 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:01.923 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:01.923 "subsystems": [ 00:16:01.923 { 00:16:01.923 "subsystem": "keyring", 00:16:01.923 "config": [ 00:16:01.923 { 00:16:01.923 "method": "keyring_file_add_key", 00:16:01.923 "params": { 00:16:01.923 "name": "key0", 00:16:01.923 "path": "/tmp/tmp.2AKhlA2Ph1" 00:16:01.923 } 00:16:01.923 } 00:16:01.923 ] 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "subsystem": "iobuf", 00:16:01.923 "config": [ 00:16:01.923 { 00:16:01.923 "method": "iobuf_set_options", 00:16:01.923 "params": { 00:16:01.923 "small_pool_count": 8192, 00:16:01.923 "large_pool_count": 1024, 00:16:01.923 "small_bufsize": 8192, 00:16:01.923 "large_bufsize": 135168, 00:16:01.923 "enable_numa": false 00:16:01.923 } 00:16:01.923 } 00:16:01.923 ] 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "subsystem": "sock", 00:16:01.923 "config": [ 00:16:01.923 { 00:16:01.923 "method": "sock_set_default_impl", 00:16:01.923 "params": { 00:16:01.923 "impl_name": "uring" 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "sock_impl_set_options", 00:16:01.923 "params": { 00:16:01.923 "impl_name": "ssl", 00:16:01.923 "recv_buf_size": 4096, 00:16:01.923 "send_buf_size": 4096, 00:16:01.923 "enable_recv_pipe": true, 00:16:01.923 "enable_quickack": false, 00:16:01.923 "enable_placement_id": 0, 00:16:01.923 "enable_zerocopy_send_server": true, 00:16:01.923 "enable_zerocopy_send_client": false, 00:16:01.923 "zerocopy_threshold": 0, 00:16:01.923 "tls_version": 0, 00:16:01.923 "enable_ktls": false 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "sock_impl_set_options", 00:16:01.923 "params": { 00:16:01.923 "impl_name": "posix", 00:16:01.923 "recv_buf_size": 2097152, 00:16:01.923 "send_buf_size": 2097152, 00:16:01.923 "enable_recv_pipe": true, 00:16:01.923 "enable_quickack": false, 00:16:01.923 "enable_placement_id": 0, 00:16:01.923 "enable_zerocopy_send_server": true, 00:16:01.923 "enable_zerocopy_send_client": false, 00:16:01.923 "zerocopy_threshold": 0, 00:16:01.923 "tls_version": 0, 00:16:01.923 "enable_ktls": false 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "sock_impl_set_options", 00:16:01.923 "params": { 00:16:01.923 
"impl_name": "uring", 00:16:01.923 "recv_buf_size": 2097152, 00:16:01.923 "send_buf_size": 2097152, 00:16:01.923 "enable_recv_pipe": true, 00:16:01.923 "enable_quickack": false, 00:16:01.923 "enable_placement_id": 0, 00:16:01.923 "enable_zerocopy_send_server": false, 00:16:01.923 "enable_zerocopy_send_client": false, 00:16:01.923 "zerocopy_threshold": 0, 00:16:01.923 "tls_version": 0, 00:16:01.923 "enable_ktls": false 00:16:01.923 } 00:16:01.923 } 00:16:01.923 ] 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "subsystem": "vmd", 00:16:01.923 "config": [] 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "subsystem": "accel", 00:16:01.923 "config": [ 00:16:01.923 { 00:16:01.923 "method": "accel_set_options", 00:16:01.923 "params": { 00:16:01.923 "small_cache_size": 128, 00:16:01.923 "large_cache_size": 16, 00:16:01.923 "task_count": 2048, 00:16:01.923 "sequence_count": 2048, 00:16:01.923 "buf_count": 2048 00:16:01.923 } 00:16:01.923 } 00:16:01.923 ] 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "subsystem": "bdev", 00:16:01.923 "config": [ 00:16:01.923 { 00:16:01.923 "method": "bdev_set_options", 00:16:01.923 "params": { 00:16:01.923 "bdev_io_pool_size": 65535, 00:16:01.923 "bdev_io_cache_size": 256, 00:16:01.923 "bdev_auto_examine": true, 00:16:01.923 "iobuf_small_cache_size": 128, 00:16:01.923 "iobuf_large_cache_size": 16 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "bdev_raid_set_options", 00:16:01.923 "params": { 00:16:01.923 "process_window_size_kb": 1024, 00:16:01.923 "process_max_bandwidth_mb_sec": 0 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "bdev_iscsi_set_options", 00:16:01.923 "params": { 00:16:01.923 "timeout_sec": 30 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "bdev_nvme_set_options", 00:16:01.923 "params": { 00:16:01.923 "action_on_timeout": "none", 00:16:01.923 "timeout_us": 0, 00:16:01.923 "timeout_admin_us": 0, 00:16:01.923 "keep_alive_timeout_ms": 10000, 00:16:01.923 "arbitration_burst": 0, 00:16:01.923 "low_priority_weight": 0, 00:16:01.923 "medium_priority_weight": 0, 00:16:01.923 "high_priority_weight": 0, 00:16:01.923 "nvme_adminq_poll_period_us": 10000, 00:16:01.923 "nvme_ioq_poll_period_us": 0, 00:16:01.923 "io_queue_requests": 512, 00:16:01.923 "delay_cmd_submit": true, 00:16:01.923 "transport_retry_count": 4, 00:16:01.923 "bdev_retry_count": 3, 00:16:01.923 "transport_ack_timeout": 0, 00:16:01.923 "ctrlr_loss_timeout_sec": 0, 00:16:01.923 "reconnect_delay_sec": 0, 00:16:01.923 "fast_io_fail_timeout_sec": 0, 00:16:01.923 "disable_auto_failback": false, 00:16:01.923 "generate_uuids": false, 00:16:01.923 "transport_tos": 0, 00:16:01.923 "nvme_error_stat": false, 00:16:01.923 "rdma_srq_size": 0, 00:16:01.923 "io_path_stat": false, 00:16:01.923 "allow_accel_sequence": false, 00:16:01.923 "rdma_max_cq_size": 0, 00:16:01.923 "rdma_cm_event_timeout_ms": 0, 00:16:01.923 "dhchap_digests": [ 00:16:01.923 "sha256", 00:16:01.923 "sha384", 00:16:01.923 "sha512" 00:16:01.923 ], 00:16:01.923 "dhchap_dhgroups": [ 00:16:01.923 "null", 00:16:01.923 "ffdhe2048", 00:16:01.923 "ffdhe3072", 00:16:01.923 "ffdhe4096", 00:16:01.923 "ffdhe6144", 00:16:01.923 "ffdhe8192" 00:16:01.923 ], 00:16:01.923 "rdma_umr_per_io": false 00:16:01.923 } 00:16:01.923 }, 00:16:01.923 { 00:16:01.923 "method": "bdev_nvme_attach_controller", 00:16:01.923 "params": { 00:16:01.923 "name": "nvme0", 00:16:01.923 "trtype": "TCP", 00:16:01.923 "adrfam": "IPv4", 00:16:01.923 "traddr": "10.0.0.3", 00:16:01.923 "trsvcid": "4420", 00:16:01.923 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:16:01.923 "prchk_reftag": false, 00:16:01.923 "prchk_guard": false, 00:16:01.923 "ctrlr_loss_timeout_sec": 0, 00:16:01.923 "reconnect_delay_sec": 0, 00:16:01.923 "fast_io_fail_timeout_sec": 0, 00:16:01.923 "psk": "key0", 00:16:01.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.923 "hdgst": false, 00:16:01.923 "ddgst": false, 00:16:01.923 "multipath": "multipath" 00:16:01.924 } 00:16:01.924 }, 00:16:01.924 { 00:16:01.924 "method": "bdev_nvme_set_hotplug", 00:16:01.924 "params": { 00:16:01.924 "period_us": 100000, 00:16:01.924 "enable": false 00:16:01.924 } 00:16:01.924 }, 00:16:01.924 { 00:16:01.924 "method": "bdev_enable_histogram", 00:16:01.924 "params": { 00:16:01.924 "name": "nvme0n1", 00:16:01.924 "enable": true 00:16:01.924 } 00:16:01.924 }, 00:16:01.924 { 00:16:01.924 "method": "bdev_wait_for_examine" 00:16:01.924 } 00:16:01.924 ] 00:16:01.924 }, 00:16:01.924 { 00:16:01.924 "subsystem": "nbd", 00:16:01.924 "config": [] 00:16:01.924 } 00:16:01.924 ] 00:16:01.924 }' 00:16:01.924 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 86955 00:16:01.924 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86955 ']' 00:16:01.924 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86955 00:16:01.924 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:01.924 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.924 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86955 00:16:02.183 killing process with pid 86955 00:16:02.183 Received shutdown signal, test time was about 1.000000 seconds 00:16:02.183 00:16:02.183 Latency(us) 00:16:02.183 [2024-12-16T01:37:32.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.183 [2024-12-16T01:37:32.841Z] =================================================================================================================== 00:16:02.183 [2024-12-16T01:37:32.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86955' 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86955 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86955 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 86926 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 86926 ']' 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 86926 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86926 00:16:02.183 killing process with pid 86926 00:16:02.183 01:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86926' 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 86926 00:16:02.183 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 86926 00:16:02.443 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:02.443 "subsystems": [ 00:16:02.443 { 00:16:02.443 "subsystem": "keyring", 00:16:02.443 "config": [ 00:16:02.443 { 00:16:02.443 "method": "keyring_file_add_key", 00:16:02.443 "params": { 00:16:02.443 "name": "key0", 00:16:02.443 "path": "/tmp/tmp.2AKhlA2Ph1" 00:16:02.443 } 00:16:02.443 } 00:16:02.443 ] 00:16:02.443 }, 00:16:02.443 { 00:16:02.443 "subsystem": "iobuf", 00:16:02.443 "config": [ 00:16:02.443 { 00:16:02.443 "method": "iobuf_set_options", 00:16:02.443 "params": { 00:16:02.443 "small_pool_count": 8192, 00:16:02.443 "large_pool_count": 1024, 00:16:02.443 "small_bufsize": 8192, 00:16:02.443 "large_bufsize": 135168, 00:16:02.443 "enable_numa": false 00:16:02.443 } 00:16:02.443 } 00:16:02.443 ] 00:16:02.443 }, 00:16:02.443 { 00:16:02.443 "subsystem": "sock", 00:16:02.443 "config": [ 00:16:02.443 { 00:16:02.443 "method": "sock_set_default_impl", 00:16:02.443 "params": { 00:16:02.443 "impl_name": "uring" 00:16:02.443 } 00:16:02.443 }, 00:16:02.443 { 00:16:02.443 "method": "sock_impl_set_options", 00:16:02.443 "params": { 00:16:02.443 "impl_name": "ssl", 00:16:02.443 "recv_buf_size": 4096, 00:16:02.443 "send_buf_size": 4096, 00:16:02.443 "enable_recv_pipe": true, 00:16:02.443 "enable_quickack": false, 00:16:02.443 "enable_placement_id": 0, 00:16:02.443 "enable_zerocopy_send_server": true, 00:16:02.443 "enable_zerocopy_send_client": false, 00:16:02.443 "zerocopy_threshold": 0, 00:16:02.443 "tls_version": 0, 00:16:02.443 "enable_ktls": false 00:16:02.443 } 00:16:02.443 }, 00:16:02.443 { 00:16:02.443 "method": "sock_impl_set_options", 00:16:02.443 "params": { 00:16:02.443 "impl_name": "posix", 00:16:02.443 "recv_buf_size": 2097152, 00:16:02.443 "send_buf_size": 2097152, 00:16:02.443 "enable_recv_pipe": true, 00:16:02.443 "enable_quickack": false, 00:16:02.443 "enable_placement_id": 0, 00:16:02.443 "enable_zerocopy_send_server": true, 00:16:02.443 "enable_zerocopy_send_client": false, 00:16:02.443 "zerocopy_threshold": 0, 00:16:02.443 "tls_version": 0, 00:16:02.443 "enable_ktls": false 00:16:02.443 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "sock_impl_set_options", 00:16:02.444 "params": { 00:16:02.444 "impl_name": "uring", 00:16:02.444 "recv_buf_size": 2097152, 00:16:02.444 "send_buf_size": 2097152, 00:16:02.444 "enable_recv_pipe": true, 00:16:02.444 "enable_quickack": false, 00:16:02.444 "enable_placement_id": 0, 00:16:02.444 "enable_zerocopy_send_server": false, 00:16:02.444 "enable_zerocopy_send_client": false, 00:16:02.444 "zerocopy_threshold": 0, 00:16:02.444 "tls_version": 0, 00:16:02.444 "enable_ktls": false 00:16:02.444 } 00:16:02.444 } 00:16:02.444 ] 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "subsystem": "vmd", 00:16:02.444 "config": [] 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "subsystem": "accel", 00:16:02.444 "config": [ 00:16:02.444 { 00:16:02.444 "method": "accel_set_options", 00:16:02.444 "params": { 
00:16:02.444 "small_cache_size": 128, 00:16:02.444 "large_cache_size": 16, 00:16:02.444 "task_count": 2048, 00:16:02.444 "sequence_count": 2048, 00:16:02.444 "buf_count": 2048 00:16:02.444 } 00:16:02.444 } 00:16:02.444 ] 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "subsystem": "bdev", 00:16:02.444 "config": [ 00:16:02.444 { 00:16:02.444 "method": "bdev_set_options", 00:16:02.444 "params": { 00:16:02.444 "bdev_io_pool_size": 65535, 00:16:02.444 "bdev_io_cache_size": 256, 00:16:02.444 "bdev_auto_examine": true, 00:16:02.444 "iobuf_small_cache_size": 128, 00:16:02.444 "iobuf_large_cache_size": 16 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "bdev_raid_set_options", 00:16:02.444 "params": { 00:16:02.444 "process_window_size_kb": 1024, 00:16:02.444 "process_max_bandwidth_mb_sec": 0 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "bdev_iscsi_set_options", 00:16:02.444 "params": { 00:16:02.444 "timeout_sec": 30 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "bdev_nvme_set_options", 00:16:02.444 "params": { 00:16:02.444 "action_on_timeout": "none", 00:16:02.444 "timeout_us": 0, 00:16:02.444 "timeout_admin_us": 0, 00:16:02.444 "keep_alive_timeout_ms": 10000, 00:16:02.444 "arbitration_burst": 0, 00:16:02.444 "low_priority_weight": 0, 00:16:02.444 "medium_priority_weight": 0, 00:16:02.444 "high_priority_weight": 0, 00:16:02.444 "nvme_adminq_poll_period_us": 10000, 00:16:02.444 "nvme_ioq_poll_period_us": 0, 00:16:02.444 "io_queue_requests": 0, 00:16:02.444 "delay_cmd_submit": true, 00:16:02.444 "transport_retry_count": 4, 00:16:02.444 "bdev_retry_count": 3, 00:16:02.444 "transport_ack_timeout": 0, 00:16:02.444 "ctrlr_loss_timeout_sec": 0, 00:16:02.444 "reconnect_delay_sec": 0, 00:16:02.444 "fast_io_fail_timeout_sec": 0, 00:16:02.444 "disable_auto_failback": false, 00:16:02.444 "generate_uuids": false, 00:16:02.444 "transport_tos": 0, 00:16:02.444 "nvme_error_stat": false, 00:16:02.444 "rdma_srq_size": 0, 00:16:02.444 "io_path_stat": false, 00:16:02.444 "allow_accel_sequence": false, 00:16:02.444 "rdma_max_cq_size": 0, 00:16:02.444 "rdma_cm_event_timeout_ms": 0, 00:16:02.444 "dhchap_digests": [ 00:16:02.444 "sha256", 00:16:02.444 "sha384", 00:16:02.444 "sha512" 00:16:02.444 ], 00:16:02.444 "dhchap_dhgroups": [ 00:16:02.444 "null", 00:16:02.444 "ffdhe2048", 00:16:02.444 "ffdhe3072", 00:16:02.444 "ffdhe4096", 00:16:02.444 "ffdhe6144", 00:16:02.444 "ffdhe8192" 00:16:02.444 ], 00:16:02.444 "rdma_umr_per_io": false 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "bdev_nvme_set_hotplug", 00:16:02.444 "params": { 00:16:02.444 "period_us": 100000, 00:16:02.444 "enable": false 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "bdev_malloc_create", 00:16:02.444 "params": { 00:16:02.444 "name": "malloc0", 00:16:02.444 "num_blocks": 8192, 00:16:02.444 "block_size": 4096, 00:16:02.444 "physical_block_size": 4096, 00:16:02.444 "uuid": "4d83c612-eda1-4c59-9c4e-3d187259a018", 00:16:02.444 "optimal_io_boundary": 0, 00:16:02.444 "md_size": 0, 00:16:02.444 "dif_type": 0, 00:16:02.444 "dif_is_head_of_md": false, 00:16:02.444 "dif_pi_format": 0 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "bdev_wait_for_examine" 00:16:02.444 } 00:16:02.444 ] 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "subsystem": "nbd", 00:16:02.444 "config": [] 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "subsystem": "scheduler", 00:16:02.444 "config": [ 00:16:02.444 { 00:16:02.444 "method": "framework_set_scheduler", 
00:16:02.444 "params": { 00:16:02.444 "name": "static" 00:16:02.444 } 00:16:02.444 } 00:16:02.444 ] 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "subsystem": "nvmf", 00:16:02.444 "config": [ 00:16:02.444 { 00:16:02.444 "method": "nvmf_set_config", 00:16:02.444 "params": { 00:16:02.444 "discovery_filter": "match_any", 00:16:02.444 "admin_cmd_passthru": { 00:16:02.444 "identify_ctrlr": false 00:16:02.444 }, 00:16:02.444 "dhchap_digests": [ 00:16:02.444 "sha256", 00:16:02.444 "sha384", 00:16:02.444 "sha512" 00:16:02.444 ], 00:16:02.444 "dhchap_dhgroups": [ 00:16:02.444 "null", 00:16:02.444 "ffdhe2048", 00:16:02.444 "ffdhe3072", 00:16:02.444 "ffdhe4096", 00:16:02.444 "ffdhe6144", 00:16:02.444 "ffdhe8192" 00:16:02.444 ] 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_set_max_subsystems", 00:16:02.444 "params": { 00:16:02.444 "max_subsystems": 1024 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_set_crdt", 00:16:02.444 "params": { 00:16:02.444 "crdt1": 0, 00:16:02.444 "crdt2": 0, 00:16:02.444 "crdt3": 0 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_create_transport", 00:16:02.444 "params": { 00:16:02.444 "trtype": "TCP", 00:16:02.444 "max_queue_depth": 128, 00:16:02.444 "max_io_qpairs_per_ctrlr": 127, 00:16:02.444 "in_capsule_data_size": 4096, 00:16:02.444 "max_io_size": 131072, 00:16:02.444 "io_unit_size": 131072, 00:16:02.444 "max_aq_depth": 128, 00:16:02.444 "num_shared_buffers": 511, 00:16:02.444 "buf_cache_size": 4294967295, 00:16:02.444 "dif_insert_or_strip": false, 00:16:02.444 "zcopy": false, 00:16:02.444 "c2h_success": false, 00:16:02.444 "sock_priority": 0, 00:16:02.444 "abort_timeout_sec": 1, 00:16:02.444 "ack_timeout": 0, 00:16:02.444 "data_wr_pool_size": 0 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_create_subsystem", 00:16:02.444 "params": { 00:16:02.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.444 "allow_any_host": false, 00:16:02.444 "serial_number": "00000000000000000000", 00:16:02.444 "model_number": "SPDK bdev Controller", 00:16:02.444 "max_namespaces": 32, 00:16:02.444 "min_cntlid": 1, 00:16:02.444 "max_cntlid": 65519, 00:16:02.444 "ana_reporting": false 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_subsystem_add_host", 00:16:02.444 "params": { 00:16:02.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.444 "host": "nqn.2016-06.io.spdk:host1", 00:16:02.444 "psk": "key0" 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_subsystem_add_ns", 00:16:02.444 "params": { 00:16:02.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.444 "namespace": { 00:16:02.444 "nsid": 1, 00:16:02.444 "bdev_name": "malloc0", 00:16:02.444 "nguid": "4D83C612EDA14C599C4E3D187259A018", 00:16:02.444 "uuid": "4d83c612-eda1-4c59-9c4e-3d187259a018", 00:16:02.444 "no_auto_visible": false 00:16:02.444 } 00:16:02.444 } 00:16:02.444 }, 00:16:02.444 { 00:16:02.444 "method": "nvmf_subsystem_add_listener", 00:16:02.444 "params": { 00:16:02.444 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:02.444 "listen_address": { 00:16:02.444 "trtype": "TCP", 00:16:02.444 "adrfam": "IPv4", 00:16:02.444 "traddr": "10.0.0.3", 00:16:02.444 "trsvcid": "4420" 00:16:02.444 }, 00:16:02.444 "secure_channel": false, 00:16:02.444 "sock_impl": "ssl" 00:16:02.444 } 00:16:02.444 } 00:16:02.444 ] 00:16:02.444 } 00:16:02.444 ] 00:16:02.444 }' 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:02.444 01:37:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=87004 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 87004 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 87004 ']' 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.444 01:37:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.445 [2024-12-16 01:37:32.972032] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:02.445 [2024-12-16 01:37:32.972356] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.704 [2024-12-16 01:37:33.119413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.704 [2024-12-16 01:37:33.138790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.704 [2024-12-16 01:37:33.138842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.704 [2024-12-16 01:37:33.138868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.704 [2024-12-16 01:37:33.138876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.704 [2024-12-16 01:37:33.138882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
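Lines target/tls.sh@267 and @268 earlier captured both running configurations as JSON via save_config, and this is where they get replayed: the old target and initiator were killed, and a fresh nvmf_tgt is started with -c /dev/fd/62, i.e. the saved JSON fed back in. The /dev/fd path indicates bash process substitution; a sketch of the pattern under that assumption (rpc_cmd being the harness's rpc.py wrapper for the default socket), not the harness's literal code:

  tgtcfg=$(rpc_cmd save_config)                                  # target/tls.sh@267
  bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)       # target/tls.sh@268
  # ...kill the old target and bdevperf, then rebuild the target from the dump:
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  waitforlisten $! /var/tmp/spdk.sock

The dump includes the keyring entry, the malloc0 namespace, and the TLS listener ("secure_channel": false, "sock_impl": "ssl"), which is why the restarted target prints the same TCP/TLS listen notices right below without any further RPCs.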
00:16:02.704 [2024-12-16 01:37:33.139181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.704 [2024-12-16 01:37:33.283960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:02.704 [2024-12-16 01:37:33.341631] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.963 [2024-12-16 01:37:33.373578] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:02.963 [2024-12-16 01:37:33.373827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:03.531 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.531 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:03.531 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:03.531 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:03.531 01:37:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=87036 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 87036 /var/tmp/bdevperf.sock 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 87036 ']' 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:03.531 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:03.531 "subsystems": [ 00:16:03.531 { 00:16:03.531 "subsystem": "keyring", 00:16:03.531 "config": [ 00:16:03.531 { 00:16:03.531 "method": "keyring_file_add_key", 00:16:03.531 "params": { 00:16:03.531 "name": "key0", 00:16:03.531 "path": "/tmp/tmp.2AKhlA2Ph1" 00:16:03.531 } 00:16:03.531 } 00:16:03.531 ] 00:16:03.531 }, 00:16:03.531 { 00:16:03.531 "subsystem": "iobuf", 00:16:03.531 "config": [ 00:16:03.531 { 00:16:03.531 "method": "iobuf_set_options", 00:16:03.531 "params": { 00:16:03.531 "small_pool_count": 8192, 00:16:03.531 "large_pool_count": 1024, 00:16:03.531 "small_bufsize": 8192, 00:16:03.531 "large_bufsize": 135168, 00:16:03.531 "enable_numa": false 00:16:03.531 } 00:16:03.531 } 00:16:03.531 ] 00:16:03.531 }, 00:16:03.531 { 00:16:03.531 "subsystem": "sock", 00:16:03.531 "config": [ 00:16:03.531 { 00:16:03.531 "method": "sock_set_default_impl", 00:16:03.531 "params": { 00:16:03.531 "impl_name": "uring" 00:16:03.531 } 00:16:03.531 }, 00:16:03.531 { 00:16:03.531 "method": "sock_impl_set_options", 00:16:03.531 "params": { 00:16:03.531 "impl_name": "ssl", 00:16:03.531 "recv_buf_size": 4096, 00:16:03.531 "send_buf_size": 4096, 00:16:03.531 "enable_recv_pipe": true, 00:16:03.531 "enable_quickack": false, 00:16:03.531 "enable_placement_id": 0, 00:16:03.531 "enable_zerocopy_send_server": true, 00:16:03.531 "enable_zerocopy_send_client": false, 00:16:03.531 "zerocopy_threshold": 0, 00:16:03.531 "tls_version": 0, 00:16:03.531 "enable_ktls": 
false 00:16:03.531 } 00:16:03.531 }, 00:16:03.531 { 00:16:03.531 "method": "sock_impl_set_options", 00:16:03.531 "params": { 00:16:03.531 "impl_name": "posix", 00:16:03.531 "recv_buf_size": 2097152, 00:16:03.531 "send_buf_size": 2097152, 00:16:03.531 "enable_recv_pipe": true, 00:16:03.531 "enable_quickack": false, 00:16:03.531 "enable_placement_id": 0, 00:16:03.531 "enable_zerocopy_send_server": true, 00:16:03.531 "enable_zerocopy_send_client": false, 00:16:03.531 "zerocopy_threshold": 0, 00:16:03.531 "tls_version": 0, 00:16:03.531 "enable_ktls": false 00:16:03.531 } 00:16:03.531 }, 00:16:03.531 { 00:16:03.531 "method": "sock_impl_set_options", 00:16:03.531 "params": { 00:16:03.531 "impl_name": "uring", 00:16:03.531 "recv_buf_size": 2097152, 00:16:03.531 "send_buf_size": 2097152, 00:16:03.531 "enable_recv_pipe": true, 00:16:03.531 "enable_quickack": false, 00:16:03.531 "enable_placement_id": 0, 00:16:03.531 "enable_zerocopy_send_server": false, 00:16:03.531 "enable_zerocopy_send_client": false, 00:16:03.531 "zerocopy_threshold": 0, 00:16:03.531 "tls_version": 0, 00:16:03.531 "enable_ktls": false 00:16:03.531 } 00:16:03.531 } 00:16:03.531 ] 00:16:03.531 }, 00:16:03.531 { 00:16:03.531 "subsystem": "vmd", 00:16:03.531 "config": [] 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "subsystem": "accel", 00:16:03.532 "config": [ 00:16:03.532 { 00:16:03.532 "method": "accel_set_options", 00:16:03.532 "params": { 00:16:03.532 "small_cache_size": 128, 00:16:03.532 "large_cache_size": 16, 00:16:03.532 "task_count": 2048, 00:16:03.532 "sequence_count": 2048, 00:16:03.532 "buf_count": 2048 00:16:03.532 } 00:16:03.532 } 00:16:03.532 ] 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "subsystem": "bdev", 00:16:03.532 "config": [ 00:16:03.532 { 00:16:03.532 "method": "bdev_set_options", 00:16:03.532 "params": { 00:16:03.532 "bdev_io_pool_size": 65535, 00:16:03.532 "bdev_io_cache_size": 256, 00:16:03.532 "bdev_auto_examine": true, 00:16:03.532 "iobuf_small_cache_size": 128, 00:16:03.532 "iobuf_large_cache_size": 16 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_raid_set_options", 00:16:03.532 "params": { 00:16:03.532 "process_window_size_kb": 1024, 00:16:03.532 "process_max_bandwidth_mb_sec": 0 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_iscsi_set_options", 00:16:03.532 "params": { 00:16:03.532 "timeout_sec": 30 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_nvme_set_options", 00:16:03.532 "params": { 00:16:03.532 "action_on_timeout": "none", 00:16:03.532 "timeout_us": 0, 00:16:03.532 "timeout_admin_us": 0, 00:16:03.532 "keep_alive_timeout_ms": 10000, 00:16:03.532 "arbitration_burst": 0, 00:16:03.532 "low_priority_weight": 0, 00:16:03.532 "medium_priority_weight": 0, 00:16:03.532 "high_priority_weight": 0, 00:16:03.532 "nvme_adminq_poll_period_us": 10000, 00:16:03.532 "nvme_ioq_poll_period_us": 0, 00:16:03.532 "io_queue_requests": 512, 00:16:03.532 "delay_cmd_submit": true, 00:16:03.532 "transport_retry_count": 4, 00:16:03.532 "bdev_retry_count": 3, 00:16:03.532 "transport_ack_timeout": 0, 00:16:03.532 "ctrlr_loss_timeout_sec": 0, 00:16:03.532 "reconnect_delay_sec": 0, 00:16:03.532 "fast_io_fail_timeout_sec": 0, 00:16:03.532 "disable_auto_failback": false, 00:16:03.532 "generate_uuids": false, 00:16:03.532 "transport_tos": 0, 00:16:03.532 "nvme_error_stat": false, 00:16:03.532 "rdma_srq_size": 0, 00:16:03.532 "io_path_stat": false, 00:16:03.532 "allow_accel_sequence": false, 00:16:03.532 "rdma_max_cq_size": 0, 00:16:03.532 
"rdma_cm_event_timeout_ms": 0, 00:16:03.532 "dhchap_digests": [ 00:16:03.532 "sha256", 00:16:03.532 "sha384", 00:16:03.532 "sha512" 00:16:03.532 ], 00:16:03.532 "dhchap_dhgroups": [ 00:16:03.532 "null", 00:16:03.532 "ffdhe2048", 00:16:03.532 "ffdhe3072", 00:16:03.532 "ffdhe4096", 00:16:03.532 "ffdhe6144", 00:16:03.532 "ffdhe8192" 00:16:03.532 ], 00:16:03.532 "rdma_umr_per_io": false 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_nvme_attach_controller", 00:16:03.532 "params": { 00:16:03.532 "name": "nvme0", 00:16:03.532 "trtype": "TCP", 00:16:03.532 "adrfam": "IPv4", 00:16:03.532 "traddr": "10.0.0.3", 00:16:03.532 "trsvcid": "4420", 00:16:03.532 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.532 "prchk_reftag": false, 00:16:03.532 "prchk_guard": false, 00:16:03.532 "ctrlr_loss_timeout_sec": 0, 00:16:03.532 "reconnect_delay_sec": 0, 00:16:03.532 "fast_io_fail_timeout_sec": 0, 00:16:03.532 "psk": "key0", 00:16:03.532 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.532 "hdgst": false, 00:16:03.532 "ddgst": false, 00:16:03.532 "multipath": "multipath" 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_nvme_set_hotplug", 00:16:03.532 "params": { 00:16:03.532 "period_us": 100000, 00:16:03.532 "enable": false 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_enable_histogram", 00:16:03.532 "params": { 00:16:03.532 "name": "nvme0n1", 00:16:03.532 "enable": true 00:16:03.532 } 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "method": "bdev_wait_for_examine" 00:16:03.532 } 00:16:03.532 ] 00:16:03.532 }, 00:16:03.532 { 00:16:03.532 "subsystem": "nbd", 00:16:03.532 "config": [] 00:16:03.532 } 00:16:03.532 ] 00:16:03.532 }' 00:16:03.532 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:03.532 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:03.532 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.532 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:03.532 [2024-12-16 01:37:34.095149] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:03.532 [2024-12-16 01:37:34.095539] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87036 ] 00:16:03.791 [2024-12-16 01:37:34.250594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.791 [2024-12-16 01:37:34.277565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.791 [2024-12-16 01:37:34.397594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:03.791 [2024-12-16 01:37:34.431806] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:04.728 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.728 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:04.728 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:04.728 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:04.986 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.986 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:04.986 Running I/O for 1 seconds... 00:16:05.922 4608.00 IOPS, 18.00 MiB/s 00:16:05.922 Latency(us) 00:16:05.923 [2024-12-16T01:37:36.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.923 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:05.923 Verification LBA range: start 0x0 length 0x2000 00:16:05.923 nvme0n1 : 1.02 4650.19 18.16 0.00 0.00 27258.52 9055.88 20614.05 00:16:05.923 [2024-12-16T01:37:36.581Z] =================================================================================================================== 00:16:05.923 [2024-12-16T01:37:36.581Z] Total : 4650.19 18.16 0.00 0.00 27258.52 9055.88 20614.05 00:16:05.923 { 00:16:05.923 "results": [ 00:16:05.923 { 00:16:05.923 "job": "nvme0n1", 00:16:05.923 "core_mask": "0x2", 00:16:05.923 "workload": "verify", 00:16:05.923 "status": "finished", 00:16:05.923 "verify_range": { 00:16:05.923 "start": 0, 00:16:05.923 "length": 8192 00:16:05.923 }, 00:16:05.923 "queue_depth": 128, 00:16:05.923 "io_size": 4096, 00:16:05.923 "runtime": 1.018454, 00:16:05.923 "iops": 4650.185477203683, 00:16:05.923 "mibps": 18.164787020326887, 00:16:05.923 "io_failed": 0, 00:16:05.923 "io_timeout": 0, 00:16:05.923 "avg_latency_us": 27258.521474201472, 00:16:05.923 "min_latency_us": 9055.883636363636, 00:16:05.923 "max_latency_us": 20614.05090909091 00:16:05.923 } 00:16:05.923 ], 00:16:05.923 "core_count": 1 00:16:05.923 } 00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
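The bdevperf run above reports its summary twice: once as the human-readable latency table and once as a JSON blob. Since the blob is plain JSON, a one-line jq filter is enough to pull the headline numbers back out; the file name perf_results.json below is hypothetical (the test only prints the blob to stdout), so this is just a sketch of possible post-processing:

  # summarize the verify run from a captured copy of the JSON output (illustrative only)
  jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg \(.avg_latency_us|floor) us"' \
      perf_results.json
  # for the run above this prints: nvme0n1: 4650 IOPS, avg 27258 us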
00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:05.923 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:06.182 nvmf_trace.0 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 87036 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 87036 ']' 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 87036 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87036 00:16:06.182 killing process with pid 87036 00:16:06.182 Received shutdown signal, test time was about 1.000000 seconds 00:16:06.182 00:16:06.182 Latency(us) 00:16:06.182 [2024-12-16T01:37:36.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.182 [2024-12-16T01:37:36.840Z] =================================================================================================================== 00:16:06.182 [2024-12-16T01:37:36.840Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87036' 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 87036 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 87036 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:06.182 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:06.441 rmmod nvme_tcp 00:16:06.441 rmmod nvme_fabrics 00:16:06.441 rmmod nvme_keyring 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 87004 ']' 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 87004 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 87004 ']' 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 87004 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87004 00:16:06.441 killing process with pid 87004 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87004' 00:16:06.441 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 87004 00:16:06.442 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 87004 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:06.442 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:06.700 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:06.700 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:06.700 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:06.700 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:06.701 01:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tFqdnZXmhj /tmp/tmp.Ivm3DZtqMf /tmp/tmp.2AKhlA2Ph1 00:16:06.701 ************************************ 00:16:06.701 END TEST nvmf_tls 00:16:06.701 ************************************ 00:16:06.701 00:16:06.701 real 1m20.430s 00:16:06.701 user 2m12.487s 00:16:06.701 sys 0m25.957s 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.701 ************************************ 00:16:06.701 START TEST nvmf_fips 00:16:06.701 ************************************ 00:16:06.701 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:06.960 * Looking for test storage... 
00:16:06.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:06.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.961 --rc genhtml_branch_coverage=1 00:16:06.961 --rc genhtml_function_coverage=1 00:16:06.961 --rc genhtml_legend=1 00:16:06.961 --rc geninfo_all_blocks=1 00:16:06.961 --rc geninfo_unexecuted_blocks=1 00:16:06.961 00:16:06.961 ' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:06.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.961 --rc genhtml_branch_coverage=1 00:16:06.961 --rc genhtml_function_coverage=1 00:16:06.961 --rc genhtml_legend=1 00:16:06.961 --rc geninfo_all_blocks=1 00:16:06.961 --rc geninfo_unexecuted_blocks=1 00:16:06.961 00:16:06.961 ' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:06.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.961 --rc genhtml_branch_coverage=1 00:16:06.961 --rc genhtml_function_coverage=1 00:16:06.961 --rc genhtml_legend=1 00:16:06.961 --rc geninfo_all_blocks=1 00:16:06.961 --rc geninfo_unexecuted_blocks=1 00:16:06.961 00:16:06.961 ' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:06.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.961 --rc genhtml_branch_coverage=1 00:16:06.961 --rc genhtml_function_coverage=1 00:16:06.961 --rc genhtml_legend=1 00:16:06.961 --rc geninfo_all_blocks=1 00:16:06.961 --rc geninfo_unexecuted_blocks=1 00:16:06.961 00:16:06.961 ' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
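The lt 1.15 2 check above goes through cmp_versions from scripts/common.sh, which splits both version strings on ".-:" and compares them element by element; the same helper is reused a little further down as ge 3.1.1 3.0.0 to gate on the OpenSSL version. A stand-alone sketch of that comparison (the name version_ge is illustrative, and it assumes purely numeric components):

  version_ge() {                        # usage: version_ge 3.1.1 3.0.0
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
      done
      return 0                          # all components equal
  }
  version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL is >= 3.0.0"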
00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.961 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:06.961 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:07.221 Error setting digest 00:16:07.221 40021602077F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:07.221 40021602077F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:07.221 
01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:07.221 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:07.222 Cannot find device "nvmf_init_br" 00:16:07.222 01:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:07.222 Cannot find device "nvmf_init_br2" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:07.222 Cannot find device "nvmf_tgt_br" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:07.222 Cannot find device "nvmf_tgt_br2" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:07.222 Cannot find device "nvmf_init_br" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:07.222 Cannot find device "nvmf_init_br2" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:07.222 Cannot find device "nvmf_tgt_br" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:07.222 Cannot find device "nvmf_tgt_br2" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:07.222 Cannot find device "nvmf_br" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:07.222 Cannot find device "nvmf_init_if" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:07.222 Cannot find device "nvmf_init_if2" 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:07.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:07.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:07.222 01:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:07.222 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:07.481 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:07.481 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:07.481 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:16:07.481 00:16:07.481 --- 10.0.0.3 ping statistics --- 00:16:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.481 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:07.481 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:07.481 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:16:07.481 00:16:07.481 --- 10.0.0.4 ping statistics --- 00:16:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.481 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:16:07.481 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:07.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:16:07.481 00:16:07.481 --- 10.0.0.1 ping statistics --- 00:16:07.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.481 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:07.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:07.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:16:07.482 00:16:07.482 --- 10.0.0.2 ping statistics --- 00:16:07.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.482 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=87354 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 87354 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 87354 ']' 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.482 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:07.741 [2024-12-16 01:37:38.221527] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
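Before this second target comes up, nvmf_veth_init has already rebuilt the virtual topology the fips test talks over: a network namespace, veth pairs, the nvmf_br bridge, iptables ACCEPT rules and the ping checks shown above. A reduced sketch of that wiring, collapsed to a single veth pair without the bridge (addresses and the namespace name are from the trace; pairing nvmf_init_if directly with nvmf_tgt_if is a simplification, not how common.sh names its peers):

  # one namespace, one veth pair: host side 10.0.0.1, target side 10.0.0.3
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3        # reachability check, as the harness does above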
00:16:07.741 [2024-12-16 01:37:38.221634] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:07.741 [2024-12-16 01:37:38.370460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.741 [2024-12-16 01:37:38.389112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:07.741 [2024-12-16 01:37:38.389181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:07.741 [2024-12-16 01:37:38.389207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:07.741 [2024-12-16 01:37:38.389215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:07.741 [2024-12-16 01:37:38.389221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:07.741 [2024-12-16 01:37:38.389546] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.000 [2024-12-16 01:37:38.418763] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Fo3 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Fo3 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Fo3 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Fo3 00:16:08.000 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:08.259 [2024-12-16 01:37:38.814963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.259 [2024-12-16 01:37:38.830917] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:08.259 [2024-12-16 01:37:38.831100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:08.259 malloc0 00:16:08.259 01:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=87381 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 87381 /var/tmp/bdevperf.sock 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 87381 ']' 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:08.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:08.259 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:08.518 [2024-12-16 01:37:38.979816] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:08.518 [2024-12-16 01:37:38.979911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87381 ] 00:16:08.518 [2024-12-16 01:37:39.136552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.518 [2024-12-16 01:37:39.160712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.778 [2024-12-16 01:37:39.194678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:08.778 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.778 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:08.778 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Fo3 00:16:09.036 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:09.294 [2024-12-16 01:37:39.717759] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.294 TLSTESTn1 00:16:09.294 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:09.294 Running I/O for 10 seconds... 
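The TLS path exercised here can be reproduced roughly as follows; this is a sketch assembled from the fips.sh commands visible above, where the interchange-format PSK is the test's hard-coded key and /tmp/spdk-psk.Fo3 is whatever mktemp returned on this particular run.

  # write the NVMe/TCP PSK (interchange format) to a 0600 temp file
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"
  # start bdevperf on its own RPC socket, then register the key and attach over TCP with TLS
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # drive the 10-second verify workload against the attached TLSTESTn1 bdev
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The same key file is handed to the target side via setup_nvmf_tgt_conf, so both ends of the connection negotiate over the one PSK.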
00:16:11.608 4495.00 IOPS, 17.56 MiB/s [2024-12-16T01:37:43.219Z] 4591.00 IOPS, 17.93 MiB/s [2024-12-16T01:37:44.167Z] 4611.00 IOPS, 18.01 MiB/s [2024-12-16T01:37:45.105Z] 4614.50 IOPS, 18.03 MiB/s [2024-12-16T01:37:46.041Z] 4634.40 IOPS, 18.10 MiB/s [2024-12-16T01:37:46.978Z] 4645.50 IOPS, 18.15 MiB/s [2024-12-16T01:37:48.355Z] 4656.86 IOPS, 18.19 MiB/s [2024-12-16T01:37:49.292Z] 4637.00 IOPS, 18.11 MiB/s [2024-12-16T01:37:50.227Z] 4623.44 IOPS, 18.06 MiB/s [2024-12-16T01:37:50.227Z] 4615.40 IOPS, 18.03 MiB/s 00:16:19.569 Latency(us) 00:16:19.569 [2024-12-16T01:37:50.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.569 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:19.569 Verification LBA range: start 0x0 length 0x2000 00:16:19.569 TLSTESTn1 : 10.02 4620.60 18.05 0.00 0.00 27651.54 5540.77 28835.84 00:16:19.569 [2024-12-16T01:37:50.227Z] =================================================================================================================== 00:16:19.569 [2024-12-16T01:37:50.227Z] Total : 4620.60 18.05 0.00 0.00 27651.54 5540.77 28835.84 00:16:19.569 { 00:16:19.569 "results": [ 00:16:19.569 { 00:16:19.569 "job": "TLSTESTn1", 00:16:19.569 "core_mask": "0x4", 00:16:19.569 "workload": "verify", 00:16:19.569 "status": "finished", 00:16:19.569 "verify_range": { 00:16:19.569 "start": 0, 00:16:19.569 "length": 8192 00:16:19.569 }, 00:16:19.569 "queue_depth": 128, 00:16:19.569 "io_size": 4096, 00:16:19.569 "runtime": 10.016013, 00:16:19.569 "iops": 4620.60103156815, 00:16:19.569 "mibps": 18.049222779563085, 00:16:19.569 "io_failed": 0, 00:16:19.569 "io_timeout": 0, 00:16:19.569 "avg_latency_us": 27651.536328121314, 00:16:19.569 "min_latency_us": 5540.770909090909, 00:16:19.569 "max_latency_us": 28835.84 00:16:19.569 } 00:16:19.569 ], 00:16:19.569 "core_count": 1 00:16:19.569 } 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:19.569 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:19.569 nvmf_trace.0 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 87381 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 87381 ']' 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 87381 
00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87381 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:19.569 killing process with pid 87381 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87381' 00:16:19.569 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 87381 00:16:19.569 Received shutdown signal, test time was about 10.000000 seconds 00:16:19.569 00:16:19.569 Latency(us) 00:16:19.569 [2024-12-16T01:37:50.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.569 [2024-12-16T01:37:50.227Z] =================================================================================================================== 00:16:19.569 [2024-12-16T01:37:50.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.570 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 87381 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:19.827 rmmod nvme_tcp 00:16:19.827 rmmod nvme_fabrics 00:16:19.827 rmmod nvme_keyring 00:16:19.827 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 87354 ']' 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 87354 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 87354 ']' 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 87354 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87354 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87354' 00:16:19.828 killing process with pid 87354 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 87354 00:16:19.828 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 87354 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:16:20.086 01:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Fo3 00:16:20.086 ************************************ 00:16:20.086 END TEST nvmf_fips 00:16:20.086 ************************************ 00:16:20.086 00:16:20.086 real 0m13.403s 00:16:20.086 user 0m18.474s 00:16:20.086 sys 0m5.387s 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.086 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.345 ************************************ 00:16:20.345 START TEST nvmf_control_msg_list 00:16:20.345 ************************************ 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:16:20.345 * Looking for test storage... 00:16:20.345 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.345 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.604 --rc genhtml_branch_coverage=1 00:16:20.604 --rc genhtml_function_coverage=1 00:16:20.604 --rc genhtml_legend=1 00:16:20.604 --rc geninfo_all_blocks=1 00:16:20.604 --rc geninfo_unexecuted_blocks=1 00:16:20.604 00:16:20.604 ' 00:16:20.604 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.605 --rc genhtml_branch_coverage=1 00:16:20.605 --rc genhtml_function_coverage=1 00:16:20.605 --rc genhtml_legend=1 00:16:20.605 --rc geninfo_all_blocks=1 00:16:20.605 --rc geninfo_unexecuted_blocks=1 00:16:20.605 00:16:20.605 ' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.605 --rc genhtml_branch_coverage=1 00:16:20.605 --rc genhtml_function_coverage=1 00:16:20.605 --rc genhtml_legend=1 00:16:20.605 --rc geninfo_all_blocks=1 00:16:20.605 --rc geninfo_unexecuted_blocks=1 00:16:20.605 00:16:20.605 ' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.605 --rc genhtml_branch_coverage=1 00:16:20.605 --rc genhtml_function_coverage=1 00:16:20.605 --rc genhtml_legend=1 00:16:20.605 --rc geninfo_all_blocks=1 00:16:20.605 --rc 
geninfo_unexecuted_blocks=1 00:16:20.605 00:16:20.605 ' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.605 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:20.605 Cannot find device "nvmf_init_br" 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:20.605 Cannot find device "nvmf_init_br2" 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:20.605 Cannot find device "nvmf_tgt_br" 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:20.605 Cannot find device "nvmf_tgt_br2" 00:16:20.605 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:20.606 Cannot find device "nvmf_init_br" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:20.606 Cannot find device "nvmf_init_br2" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:20.606 Cannot find device "nvmf_tgt_br" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:20.606 Cannot find device "nvmf_tgt_br2" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:20.606 Cannot find device "nvmf_br" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:20.606 Cannot find 
device "nvmf_init_if" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:20.606 Cannot find device "nvmf_init_if2" 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:20.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:20.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.606 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:20.865 01:37:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:20.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:20.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:16:20.865 00:16:20.865 --- 10.0.0.3 ping statistics --- 00:16:20.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.865 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:20.865 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:20.865 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:16:20.865 00:16:20.865 --- 10.0.0.4 ping statistics --- 00:16:20.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.865 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:20.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:20.865 00:16:20.865 --- 10.0.0.1 ping statistics --- 00:16:20.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.865 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:20.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:16:20.865 00:16:20.865 --- 10.0.0.2 ping statistics --- 00:16:20.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.865 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=87773 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 87773 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 87773 ']' 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:20.865 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.866 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.866 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
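For reference, the nvmf_veth_init sequence logged just above (rebuilt from scratch because the previous test tore the topology down) amounts to the following; every command is taken from the log output, only grouped and commented, and the trailing loop is merely a compact rendering of the four "master nvmf_br" calls.

  ip netns add nvmf_tgt_ns_spdk
  # two initiator-side and two target-side veth pairs
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # 10.0.0.1/.2 stay on the host, 10.0.0.3/.4 live inside the target namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bridge the peer ends together so the host-side initiators can reach the namespaced target
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done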
00:16:20.866 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.866 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:20.866 [2024-12-16 01:37:51.507928] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:20.866 [2024-12-16 01:37:51.508735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.124 [2024-12-16 01:37:51.664628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.124 [2024-12-16 01:37:51.687119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.124 [2024-12-16 01:37:51.687191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:21.124 [2024-12-16 01:37:51.687215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.124 [2024-12-16 01:37:51.687225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.124 [2024-12-16 01:37:51.687234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.124 [2024-12-16 01:37:51.687620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.124 [2024-12-16 01:37:51.721038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:21.124 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.124 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:16:21.124 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:21.124 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:21.124 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.383 [2024-12-16 01:37:51.813336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.383 Malloc0 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:21.383 [2024-12-16 01:37:51.852778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=87793 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=87794 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=87795 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:21.383 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 87793 00:16:21.641 [2024-12-16 01:37:52.041336] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:21.642 [2024-12-16 01:37:52.041615] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:21.642 [2024-12-16 01:37:52.041805] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:22.579 Initializing NVMe Controllers 00:16:22.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:22.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:16:22.579 Initialization complete. Launching workers. 00:16:22.579 ======================================================== 00:16:22.579 Latency(us) 00:16:22.579 Device Information : IOPS MiB/s Average min max 00:16:22.579 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3579.00 13.98 279.04 170.86 500.55 00:16:22.579 ======================================================== 00:16:22.579 Total : 3579.00 13.98 279.04 170.86 500.55 00:16:22.579 00:16:22.579 Initializing NVMe Controllers 00:16:22.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:22.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:16:22.579 Initialization complete. Launching workers. 00:16:22.579 ======================================================== 00:16:22.579 Latency(us) 00:16:22.579 Device Information : IOPS MiB/s Average min max 00:16:22.579 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3576.00 13.97 279.26 214.18 554.06 00:16:22.579 ======================================================== 00:16:22.579 Total : 3576.00 13.97 279.26 214.18 554.06 00:16:22.579 00:16:22.579 Initializing NVMe Controllers 00:16:22.579 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:22.579 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:16:22.579 Initialization complete. Launching workers. 
00:16:22.579 ======================================================== 00:16:22.579 Latency(us) 00:16:22.579 Device Information : IOPS MiB/s Average min max 00:16:22.579 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3574.00 13.96 279.43 225.71 528.32 00:16:22.579 ======================================================== 00:16:22.579 Total : 3574.00 13.96 279.43 225.71 528.32 00:16:22.579 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 87794 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 87795 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:22.579 rmmod nvme_tcp 00:16:22.579 rmmod nvme_fabrics 00:16:22.579 rmmod nvme_keyring 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 87773 ']' 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 87773 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 87773 ']' 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 87773 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87773 00:16:22.579 killing process with pid 87773 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87773' 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 87773 00:16:22.579 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 87773 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:22.838 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:16:23.097 00:16:23.097 real 0m2.812s 00:16:23.097 user 0m4.773s 00:16:23.097 
sys 0m1.246s 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.097 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:16:23.097 ************************************ 00:16:23.097 END TEST nvmf_control_msg_list 00:16:23.098 ************************************ 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.098 ************************************ 00:16:23.098 START TEST nvmf_wait_for_buf 00:16:23.098 ************************************ 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:16:23.098 * Looking for test storage... 00:16:23.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:23.098 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:16:23.357 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:23.357 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.358 --rc genhtml_branch_coverage=1 00:16:23.358 --rc genhtml_function_coverage=1 00:16:23.358 --rc genhtml_legend=1 00:16:23.358 --rc geninfo_all_blocks=1 00:16:23.358 --rc geninfo_unexecuted_blocks=1 00:16:23.358 00:16:23.358 ' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.358 --rc genhtml_branch_coverage=1 00:16:23.358 --rc genhtml_function_coverage=1 00:16:23.358 --rc genhtml_legend=1 00:16:23.358 --rc geninfo_all_blocks=1 00:16:23.358 --rc geninfo_unexecuted_blocks=1 00:16:23.358 00:16:23.358 ' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.358 --rc genhtml_branch_coverage=1 00:16:23.358 --rc genhtml_function_coverage=1 00:16:23.358 --rc genhtml_legend=1 00:16:23.358 --rc geninfo_all_blocks=1 00:16:23.358 --rc geninfo_unexecuted_blocks=1 00:16:23.358 00:16:23.358 ' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:23.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.358 --rc genhtml_branch_coverage=1 00:16:23.358 --rc genhtml_function_coverage=1 00:16:23.358 --rc genhtml_legend=1 00:16:23.358 --rc geninfo_all_blocks=1 00:16:23.358 --rc geninfo_unexecuted_blocks=1 00:16:23.358 00:16:23.358 ' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:23.358 01:37:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.358 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:23.359 Cannot find device "nvmf_init_br" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:23.359 Cannot find device "nvmf_init_br2" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:23.359 Cannot find device "nvmf_tgt_br" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:23.359 Cannot find device "nvmf_tgt_br2" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:23.359 Cannot find device "nvmf_init_br" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:23.359 Cannot find device "nvmf_init_br2" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:23.359 Cannot find device "nvmf_tgt_br" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:23.359 Cannot find device "nvmf_tgt_br2" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:23.359 Cannot find device "nvmf_br" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:23.359 Cannot find device "nvmf_init_if" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:23.359 Cannot find device "nvmf_init_if2" 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.359 01:37:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:16:23.359 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.359 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.359 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:16:23.359 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.359 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.618 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:23.619 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.619 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:23.619 00:16:23.619 --- 10.0.0.3 ping statistics --- 00:16:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.619 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:23.619 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:23.619 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:16:23.619 00:16:23.619 --- 10.0.0.4 ping statistics --- 00:16:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.619 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:23.619 00:16:23.619 --- 10.0.0.1 ping statistics --- 00:16:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.619 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:23.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:23.619 00:16:23.619 --- 10.0.0.2 ping statistics --- 00:16:23.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.619 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:23.619 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:23.877 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:16:23.877 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:23.877 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.877 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:23.877 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=88029 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 88029 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 88029 ']' 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.878 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:23.878 [2024-12-16 01:37:54.332825] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:16:23.878 [2024-12-16 01:37:54.332905] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.878 [2024-12-16 01:37:54.476921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.878 [2024-12-16 01:37:54.495057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.878 [2024-12-16 01:37:54.495125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.878 [2024-12-16 01:37:54.495150] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.878 [2024-12-16 01:37:54.495157] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.878 [2024-12-16 01:37:54.495164] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.878 [2024-12-16 01:37:54.495436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 [2024-12-16 01:37:54.684826] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 Malloc0 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 [2024-12-16 01:37:54.727052] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:24.137 [2024-12-16 01:37:54.755218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.137 01:37:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:16:24.396 [2024-12-16 01:37:54.950674] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:25.773 Initializing NVMe Controllers 00:16:25.773 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:25.773 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:25.773 Initialization complete. Launching workers. 00:16:25.773 ======================================================== 00:16:25.773 Latency(us) 00:16:25.773 Device Information : IOPS MiB/s Average min max 00:16:25.773 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 508.00 63.50 7928.66 6196.13 11213.20 00:16:25.773 ======================================================== 00:16:25.773 Total : 508.00 63.50 7928.66 6196.13 11213.20 00:16:25.773 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4826 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4826 -eq 0 ]] 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.773 rmmod nvme_tcp 00:16:25.773 rmmod nvme_fabrics 00:16:25.773 rmmod nvme_keyring 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 88029 ']' 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 88029 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 88029 ']' 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 88029 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.773 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88029 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:26.032 killing process with pid 88029 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88029' 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 88029 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 88029 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:26.032 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:26.291 00:16:26.291 real 0m3.140s 00:16:26.291 user 0m2.549s 00:16:26.291 sys 0m0.744s 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:26.291 ************************************ 00:16:26.291 END TEST nvmf_wait_for_buf 00:16:26.291 ************************************ 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.291 01:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.292 01:37:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:26.292 ************************************ 00:16:26.292 START TEST nvmf_fuzz 00:16:26.292 ************************************ 00:16:26.292 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:26.292 * Looking for test storage... 
00:16:26.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.292 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.292 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.292 01:37:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:26.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.552 --rc genhtml_branch_coverage=1 00:16:26.552 --rc genhtml_function_coverage=1 00:16:26.552 --rc genhtml_legend=1 00:16:26.552 --rc geninfo_all_blocks=1 00:16:26.552 --rc geninfo_unexecuted_blocks=1 00:16:26.552 00:16:26.552 ' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:26.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.552 --rc genhtml_branch_coverage=1 00:16:26.552 --rc genhtml_function_coverage=1 00:16:26.552 --rc genhtml_legend=1 00:16:26.552 --rc geninfo_all_blocks=1 00:16:26.552 --rc geninfo_unexecuted_blocks=1 00:16:26.552 00:16:26.552 ' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:26.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.552 --rc genhtml_branch_coverage=1 00:16:26.552 --rc genhtml_function_coverage=1 00:16:26.552 --rc genhtml_legend=1 00:16:26.552 --rc geninfo_all_blocks=1 00:16:26.552 --rc geninfo_unexecuted_blocks=1 00:16:26.552 00:16:26.552 ' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:26.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.552 --rc genhtml_branch_coverage=1 00:16:26.552 --rc genhtml_function_coverage=1 00:16:26.552 --rc genhtml_legend=1 00:16:26.552 --rc geninfo_all_blocks=1 00:16:26.552 --rc geninfo_unexecuted_blocks=1 00:16:26.552 00:16:26.552 ' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.552 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.553 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
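The "[: : integer expression expected" message above comes from build_nvmf_app_args in common.sh (line 33) comparing an empty value numerically with the legacy test builtin; the run continues normally, so it appears harmless here. A minimal reproduction, with a placeholder variable name rather than the one common.sh actually evaluates:

# Reproducing the benign error seen in the trace: an empty string compared
# numerically with the POSIX test builtin.
SPDK_TEST_EXAMPLE_FLAG=""   # placeholder name, not the real common.sh variable

[ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]   # prints "[: : integer expression expected"

# Guarded variants that keep the check quiet:
[ "${SPDK_TEST_EXAMPLE_FLAG:-0}" -eq 1 ] && echo "flag enabled"
(( SPDK_TEST_EXAMPLE_FLAG )) && echo "flag enabled"   # empty/unset counts as 0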
00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:26.553 Cannot find device "nvmf_init_br" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:16:26.553 01:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:26.553 Cannot find device "nvmf_init_br2" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:26.553 Cannot find device "nvmf_tgt_br" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.553 Cannot find device "nvmf_tgt_br2" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:26.553 Cannot find device "nvmf_init_br" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:26.553 Cannot find device "nvmf_init_br2" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:26.553 Cannot find device "nvmf_tgt_br" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:26.553 Cannot find device "nvmf_tgt_br2" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:26.553 Cannot find device "nvmf_br" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:26.553 Cannot find device "nvmf_init_if" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:26.553 Cannot find device "nvmf_init_if2" 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:16:26.553 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:26.814 01:37:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:26.814 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.814 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:16:26.814 00:16:26.814 --- 10.0.0.3 ping statistics --- 00:16:26.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.814 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:26.814 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:26.814 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:16:26.814 00:16:26.814 --- 10.0.0.4 ping statistics --- 00:16:26.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.814 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:26.814 00:16:26.814 --- 10.0.0.1 ping statistics --- 00:16:26.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.814 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:26.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:26.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:26.814 00:16:26.814 --- 10.0.0.2 ping statistics --- 00:16:26.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.814 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:26.814 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=88291 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 88291 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 88291 ']' 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
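nvmf_veth_init above builds a small virtual topology before the target starts: two initiator-side veths on the host at 10.0.0.1/2, two target-side veths at 10.0.0.3/4 inside the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, and TCP port 4420 opened in iptables. A condensed sketch of those steps, simplified from the xtrace (the real helper also tears down any leftovers from a previous run and tags its iptables rules):

#!/usr/bin/env bash
# Condensed reconstruction of the nvmf_veth_init steps traced above.
# Requires root; creates namespace, veth pairs, bridge, and firewall rules.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: *_if ends carry addresses, *_br ends are enslaved to the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# One bridge ties both sides together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity checks mirroring the pings in the log.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

Once the pings succeed, nvmf_tgt is launched inside the namespace (as in the trace) so it can listen on 10.0.0.3:4420 while the fuzzer connects from the host side.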
00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.815 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.432 Malloc0 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:16:27.432 01:37:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:16:27.432 Shutting down the fuzz application 00:16:27.691 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:16:27.691 Shutting down the fuzz application 00:16:27.691 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.691 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.691 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:27.950 rmmod nvme_tcp 00:16:27.950 rmmod nvme_fabrics 00:16:27.950 rmmod nvme_keyring 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 88291 ']' 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 88291 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 88291 ']' 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 88291 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88291 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.950 killing process with pid 88291 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88291' 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 88291 00:16:27.950 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 88291 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:28.210 01:37:58 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.210 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:16:28.470 00:16:28.470 real 0m2.031s 00:16:28.470 user 0m1.675s 00:16:28.470 sys 0m0.646s 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:28.470 ************************************ 00:16:28.470 END TEST nvmf_fuzz 00:16:28.470 ************************************ 00:16:28.470 01:37:58 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:28.470 ************************************ 00:16:28.470 START TEST nvmf_multiconnection 00:16:28.470 ************************************ 00:16:28.470 01:37:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:28.470 * Looking for test storage... 00:16:28.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:28.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.470 --rc genhtml_branch_coverage=1 00:16:28.470 --rc genhtml_function_coverage=1 00:16:28.470 --rc genhtml_legend=1 00:16:28.470 --rc geninfo_all_blocks=1 00:16:28.470 --rc geninfo_unexecuted_blocks=1 00:16:28.470 00:16:28.470 ' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:28.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.470 --rc genhtml_branch_coverage=1 00:16:28.470 --rc genhtml_function_coverage=1 00:16:28.470 --rc genhtml_legend=1 00:16:28.470 --rc geninfo_all_blocks=1 00:16:28.470 --rc geninfo_unexecuted_blocks=1 00:16:28.470 00:16:28.470 ' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:28.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.470 --rc genhtml_branch_coverage=1 00:16:28.470 --rc genhtml_function_coverage=1 00:16:28.470 --rc genhtml_legend=1 00:16:28.470 --rc geninfo_all_blocks=1 00:16:28.470 --rc geninfo_unexecuted_blocks=1 00:16:28.470 00:16:28.470 ' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:28.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:28.470 --rc genhtml_branch_coverage=1 00:16:28.470 --rc genhtml_function_coverage=1 00:16:28.470 --rc genhtml_legend=1 00:16:28.470 --rc geninfo_all_blocks=1 00:16:28.470 --rc geninfo_unexecuted_blocks=1 00:16:28.470 00:16:28.470 ' 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:28.470 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.730 
01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.730 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:28.730 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:28.731 01:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:28.731 Cannot find device "nvmf_init_br" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:28.731 Cannot find device "nvmf_init_br2" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:28.731 Cannot find device "nvmf_tgt_br" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:28.731 Cannot find device "nvmf_tgt_br2" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:28.731 Cannot find device "nvmf_init_br" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:28.731 Cannot find device "nvmf_init_br2" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:28.731 Cannot find device "nvmf_tgt_br" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:28.731 Cannot find device "nvmf_tgt_br2" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:28.731 Cannot find device "nvmf_br" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:28.731 Cannot find device "nvmf_init_if" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:16:28.731 Cannot find device "nvmf_init_if2" 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:28.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:28.731 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:28.731 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:28.990 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:28.991 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:28.991 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:16:28.991 00:16:28.991 --- 10.0.0.3 ping statistics --- 00:16:28.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.991 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:28.991 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:28.991 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:16:28.991 00:16:28.991 --- 10.0.0.4 ping statistics --- 00:16:28.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.991 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:28.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:28.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:28.991 00:16:28.991 --- 10.0.0.1 ping statistics --- 00:16:28.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.991 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:28.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:16:28.991 00:16:28.991 --- 10.0.0.2 ping statistics --- 00:16:28.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.991 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=88524 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 88524 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 88524 ']' 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.991 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:28.991 [2024-12-16 01:37:59.629469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:28.991 [2024-12-16 01:37:59.629559] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.250 [2024-12-16 01:37:59.772877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.250 [2024-12-16 01:37:59.792711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.250 [2024-12-16 01:37:59.792776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.250 [2024-12-16 01:37:59.792786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.250 [2024-12-16 01:37:59.792793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:29.250 [2024-12-16 01:37:59.792799] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.250 [2024-12-16 01:37:59.793450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.250 [2024-12-16 01:37:59.793568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.250 [2024-12-16 01:37:59.794340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.250 [2024-12-16 01:37:59.794387] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.250 [2024-12-16 01:37:59.822585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:29.250 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.250 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:16:29.250 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.250 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.250 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 [2024-12-16 01:37:59.948005] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:16:29.509 01:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 Malloc1 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 01:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 [2024-12-16 01:38:00.013034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.509 Malloc2 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.509 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 Malloc3 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 Malloc4 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.510 Malloc5 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.510 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.769 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.769 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:29.769 
01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.769 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 Malloc6 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 Malloc7 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 Malloc8 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 
01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 Malloc9 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 Malloc10 00:16:29.770 01:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.770 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:29.770 Malloc11 00:16:29.771 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.771 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:29.771 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.771 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:30.029 01:38:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:32.560 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:34.460 01:38:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:34.460 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:36.361 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:36.687 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:36.687 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:36.687 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:36.687 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:16:36.687 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:38.588 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:41.118 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:43.019 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:43.020 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:43.020 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.020 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.020 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.020 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:44.936 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:45.194 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:45.194 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:16:45.194 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.194 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:45.194 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:47.095 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:47.096 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:47.354 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:47.354 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.354 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.354 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.354 01:38:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:49.257 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:49.516 01:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:49.516 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:49.516 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:49.516 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:49.516 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:51.421 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:51.421 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:51.421 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:51.679 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:54.210 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:54.210 [global] 00:16:54.210 thread=1 00:16:54.210 invalidate=1 00:16:54.210 rw=read 00:16:54.210 time_based=1 
00:16:54.210 runtime=10 00:16:54.210 ioengine=libaio 00:16:54.210 direct=1 00:16:54.210 bs=262144 00:16:54.210 iodepth=64 00:16:54.210 norandommap=1 00:16:54.210 numjobs=1 00:16:54.210 00:16:54.210 [job0] 00:16:54.210 filename=/dev/nvme0n1 00:16:54.210 [job1] 00:16:54.210 filename=/dev/nvme10n1 00:16:54.210 [job2] 00:16:54.210 filename=/dev/nvme1n1 00:16:54.210 [job3] 00:16:54.210 filename=/dev/nvme2n1 00:16:54.210 [job4] 00:16:54.210 filename=/dev/nvme3n1 00:16:54.210 [job5] 00:16:54.210 filename=/dev/nvme4n1 00:16:54.210 [job6] 00:16:54.210 filename=/dev/nvme5n1 00:16:54.210 [job7] 00:16:54.210 filename=/dev/nvme6n1 00:16:54.210 [job8] 00:16:54.210 filename=/dev/nvme7n1 00:16:54.210 [job9] 00:16:54.210 filename=/dev/nvme8n1 00:16:54.210 [job10] 00:16:54.210 filename=/dev/nvme9n1 00:16:54.210 Could not set queue depth (nvme0n1) 00:16:54.210 Could not set queue depth (nvme10n1) 00:16:54.210 Could not set queue depth (nvme1n1) 00:16:54.210 Could not set queue depth (nvme2n1) 00:16:54.210 Could not set queue depth (nvme3n1) 00:16:54.211 Could not set queue depth (nvme4n1) 00:16:54.211 Could not set queue depth (nvme5n1) 00:16:54.211 Could not set queue depth (nvme6n1) 00:16:54.211 Could not set queue depth (nvme7n1) 00:16:54.211 Could not set queue depth (nvme8n1) 00:16:54.211 Could not set queue depth (nvme9n1) 00:16:54.211 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:54.211 fio-3.35 00:16:54.211 Starting 11 threads 00:17:06.419 00:17:06.419 job0: (groupid=0, jobs=1): err= 0: pid=88970: Mon Dec 16 01:38:35 2024 00:17:06.419 read: IOPS=70, BW=17.7MiB/s (18.5MB/s)(180MiB/10183msec) 00:17:06.419 slat (usec): min=20, max=392200, avg=14043.05, stdev=43464.97 00:17:06.419 clat (msec): min=51, max=1260, avg=889.83, stdev=259.33 00:17:06.419 lat (msec): min=51, max=1383, avg=903.87, stdev=261.63 00:17:06.419 clat percentiles (msec): 00:17:06.419 | 1.00th=[ 53], 5.00th=[ 186], 10.00th=[ 451], 20.00th=[ 768], 00:17:06.419 | 30.00th=[ 844], 40.00th=[ 944], 50.00th=[ 986], 60.00th=[ 1011], 00:17:06.419 | 70.00th=[ 1036], 80.00th=[ 1070], 90.00th=[ 1116], 95.00th=[ 1150], 00:17:06.419 | 99.00th=[ 1200], 99.50th=[ 1234], 99.90th=[ 1267], 99.95th=[ 1267], 00:17:06.419 | 99.99th=[ 1267] 00:17:06.419 bw ( KiB/s): min= 5120, 
max=32256, per=1.96%, avg=16791.40, stdev=7476.50, samples=20 00:17:06.419 iops : min= 20, max= 126, avg=65.55, stdev=29.18, samples=20 00:17:06.419 lat (msec) : 100=2.08%, 250=3.47%, 500=5.69%, 750=4.72%, 1000=41.53% 00:17:06.419 lat (msec) : 2000=42.50% 00:17:06.419 cpu : usr=0.02%, sys=0.41%, ctx=139, majf=0, minf=4097 00:17:06.419 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:17:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.419 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:17:06.419 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.419 job1: (groupid=0, jobs=1): err= 0: pid=88971: Mon Dec 16 01:38:35 2024 00:17:06.419 read: IOPS=246, BW=61.5MiB/s (64.5MB/s)(621MiB/10083msec) 00:17:06.419 slat (usec): min=20, max=150450, avg=4034.74, stdev=9794.57 00:17:06.419 clat (msec): min=29, max=360, avg=255.51, stdev=39.49 00:17:06.419 lat (msec): min=31, max=361, avg=259.55, stdev=39.44 00:17:06.419 clat percentiles (msec): 00:17:06.419 | 1.00th=[ 68], 5.00th=[ 182], 10.00th=[ 218], 20.00th=[ 241], 00:17:06.419 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 264], 00:17:06.419 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 313], 00:17:06.419 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 363], 99.95th=[ 363], 00:17:06.419 | 99.99th=[ 363] 00:17:06.419 bw ( KiB/s): min=52736, max=74240, per=7.23%, avg=61906.35, stdev=4200.47, samples=20 00:17:06.419 iops : min= 206, max= 290, avg=241.70, stdev=16.45, samples=20 00:17:06.419 lat (msec) : 50=0.52%, 100=0.64%, 250=30.26%, 500=68.57% 00:17:06.419 cpu : usr=0.14%, sys=1.13%, ctx=529, majf=0, minf=4097 00:17:06.419 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.419 issued rwts: total=2482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.419 job2: (groupid=0, jobs=1): err= 0: pid=88972: Mon Dec 16 01:38:35 2024 00:17:06.419 read: IOPS=67, BW=16.9MiB/s (17.7MB/s)(172MiB/10172msec) 00:17:06.419 slat (usec): min=27, max=378419, avg=14634.04, stdev=44765.30 00:17:06.419 clat (msec): min=84, max=1277, avg=932.95, stdev=270.48 00:17:06.419 lat (msec): min=84, max=1339, avg=947.59, stdev=271.43 00:17:06.419 clat percentiles (msec): 00:17:06.419 | 1.00th=[ 91], 5.00th=[ 338], 10.00th=[ 642], 20.00th=[ 785], 00:17:06.419 | 30.00th=[ 844], 40.00th=[ 894], 50.00th=[ 969], 60.00th=[ 1083], 00:17:06.419 | 70.00th=[ 1116], 80.00th=[ 1183], 90.00th=[ 1217], 95.00th=[ 1234], 00:17:06.419 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1284], 99.95th=[ 1284], 00:17:06.419 | 99.99th=[ 1284] 00:17:06.419 bw ( KiB/s): min= 5109, max=27648, per=1.86%, avg=15921.80, stdev=7426.83, samples=20 00:17:06.419 iops : min= 19, max= 108, avg=62.05, stdev=29.18, samples=20 00:17:06.419 lat (msec) : 100=1.75%, 250=1.75%, 500=5.10%, 750=9.33%, 1000=35.42% 00:17:06.419 lat (msec) : 2000=46.65% 00:17:06.419 cpu : usr=0.05%, sys=0.32%, ctx=113, majf=0, minf=4097 00:17:06.419 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:17:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.419 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, 
>=64=0.0% 00:17:06.419 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.419 job3: (groupid=0, jobs=1): err= 0: pid=88973: Mon Dec 16 01:38:35 2024 00:17:06.419 read: IOPS=67, BW=17.0MiB/s (17.8MB/s)(173MiB/10168msec) 00:17:06.419 slat (usec): min=22, max=518180, avg=14572.30, stdev=48536.14 00:17:06.419 clat (msec): min=85, max=1424, avg=927.19, stdev=224.24 00:17:06.419 lat (msec): min=85, max=1424, avg=941.76, stdev=225.39 00:17:06.419 clat percentiles (msec): 00:17:06.419 | 1.00th=[ 87], 5.00th=[ 584], 10.00th=[ 701], 20.00th=[ 802], 00:17:06.419 | 30.00th=[ 844], 40.00th=[ 902], 50.00th=[ 944], 60.00th=[ 995], 00:17:06.419 | 70.00th=[ 1028], 80.00th=[ 1099], 90.00th=[ 1183], 95.00th=[ 1267], 00:17:06.419 | 99.00th=[ 1334], 99.50th=[ 1351], 99.90th=[ 1418], 99.95th=[ 1418], 00:17:06.419 | 99.99th=[ 1418] 00:17:06.419 bw ( KiB/s): min= 5120, max=31744, per=1.84%, avg=15735.84, stdev=8633.11, samples=19 00:17:06.419 iops : min= 20, max= 124, avg=61.37, stdev=33.84, samples=19 00:17:06.419 lat (msec) : 100=1.74%, 250=1.45%, 500=0.72%, 750=8.99%, 1000=49.42% 00:17:06.419 lat (msec) : 2000=37.68% 00:17:06.419 cpu : usr=0.03%, sys=0.40%, ctx=109, majf=0, minf=4097 00:17:06.419 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:17:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.419 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:17:06.419 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.419 job4: (groupid=0, jobs=1): err= 0: pid=88974: Mon Dec 16 01:38:35 2024 00:17:06.419 read: IOPS=249, BW=62.3MiB/s (65.4MB/s)(629MiB/10085msec) 00:17:06.419 slat (usec): min=20, max=72483, avg=3975.15, stdev=9633.40 00:17:06.419 clat (msec): min=19, max=321, avg=252.30, stdev=37.78 00:17:06.419 lat (msec): min=20, max=337, avg=256.28, stdev=37.75 00:17:06.419 clat percentiles (msec): 00:17:06.419 | 1.00th=[ 111], 5.00th=[ 174], 10.00th=[ 209], 20.00th=[ 234], 00:17:06.419 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 266], 00:17:06.419 | 70.00th=[ 271], 80.00th=[ 279], 90.00th=[ 292], 95.00th=[ 296], 00:17:06.419 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:17:06.419 | 99.99th=[ 321] 00:17:06.419 bw ( KiB/s): min=57344, max=71680, per=7.33%, avg=62770.80, stdev=3314.01, samples=20 00:17:06.419 iops : min= 224, max= 280, avg=245.10, stdev=12.93, samples=20 00:17:06.419 lat (msec) : 20=0.04%, 100=0.91%, 250=35.23%, 500=63.82% 00:17:06.419 cpu : usr=0.10%, sys=0.96%, ctx=564, majf=0, minf=4097 00:17:06.419 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:06.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.420 issued rwts: total=2515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 job5: (groupid=0, jobs=1): err= 0: pid=88975: Mon Dec 16 01:38:35 2024 00:17:06.420 read: IOPS=1296, BW=324MiB/s (340MB/s)(3249MiB/10022msec) 00:17:06.420 slat (usec): min=20, max=15344, avg=765.06, stdev=1683.05 00:17:06.420 clat (usec): min=19798, max=71998, avg=48554.66, stdev=3968.48 00:17:06.420 lat (usec): min=21910, max=72030, avg=49319.72, stdev=3885.41 00:17:06.420 clat percentiles (usec): 
00:17:06.420 | 1.00th=[36963], 5.00th=[42206], 10.00th=[44303], 20.00th=[45876], 00:17:06.420 | 30.00th=[47449], 40.00th=[48497], 50.00th=[49021], 60.00th=[49546], 00:17:06.420 | 70.00th=[50594], 80.00th=[51643], 90.00th=[52691], 95.00th=[53740], 00:17:06.420 | 99.00th=[55837], 99.50th=[56361], 99.90th=[63701], 99.95th=[67634], 00:17:06.420 | 99.99th=[71828] 00:17:06.420 bw ( KiB/s): min=313344, max=355328, per=38.65%, avg=331077.25, stdev=10010.00, samples=20 00:17:06.420 iops : min= 1224, max= 1388, avg=1293.25, stdev=39.10, samples=20 00:17:06.420 lat (msec) : 20=0.01%, 50=64.78%, 100=35.22% 00:17:06.420 cpu : usr=0.45%, sys=4.58%, ctx=2445, majf=0, minf=4097 00:17:06.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:06.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.420 issued rwts: total=12997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 job6: (groupid=0, jobs=1): err= 0: pid=88976: Mon Dec 16 01:38:35 2024 00:17:06.420 read: IOPS=72, BW=18.0MiB/s (18.9MB/s)(183MiB/10161msec) 00:17:06.420 slat (usec): min=21, max=286517, avg=13669.71, stdev=39981.08 00:17:06.420 clat (msec): min=136, max=1161, avg=872.50, stdev=189.44 00:17:06.420 lat (msec): min=143, max=1161, avg=886.17, stdev=190.42 00:17:06.420 clat percentiles (msec): 00:17:06.420 | 1.00th=[ 271], 5.00th=[ 384], 10.00th=[ 667], 20.00th=[ 802], 00:17:06.420 | 30.00th=[ 827], 40.00th=[ 877], 50.00th=[ 911], 60.00th=[ 953], 00:17:06.420 | 70.00th=[ 986], 80.00th=[ 1003], 90.00th=[ 1053], 95.00th=[ 1083], 00:17:06.420 | 99.00th=[ 1116], 99.50th=[ 1116], 99.90th=[ 1167], 99.95th=[ 1167], 00:17:06.420 | 99.99th=[ 1167] 00:17:06.420 bw ( KiB/s): min=13312, max=20992, per=2.00%, avg=17128.45, stdev=2360.98, samples=20 00:17:06.420 iops : min= 52, max= 82, avg=66.85, stdev= 9.15, samples=20 00:17:06.420 lat (msec) : 250=0.82%, 500=5.32%, 750=7.09%, 1000=62.07%, 2000=24.69% 00:17:06.420 cpu : usr=0.01%, sys=0.38%, ctx=130, majf=0, minf=4097 00:17:06.420 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:17:06.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.420 issued rwts: total=733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 job7: (groupid=0, jobs=1): err= 0: pid=88977: Mon Dec 16 01:38:35 2024 00:17:06.420 read: IOPS=69, BW=17.4MiB/s (18.2MB/s)(177MiB/10170msec) 00:17:06.420 slat (usec): min=20, max=387499, avg=14179.30, stdev=45428.77 00:17:06.420 clat (msec): min=140, max=1287, avg=904.82, stdev=200.31 00:17:06.420 lat (msec): min=184, max=1287, avg=919.00, stdev=201.03 00:17:06.420 clat percentiles (msec): 00:17:06.420 | 1.00th=[ 292], 5.00th=[ 388], 10.00th=[ 735], 20.00th=[ 818], 00:17:06.420 | 30.00th=[ 877], 40.00th=[ 927], 50.00th=[ 961], 60.00th=[ 986], 00:17:06.420 | 70.00th=[ 1011], 80.00th=[ 1028], 90.00th=[ 1099], 95.00th=[ 1116], 00:17:06.420 | 99.00th=[ 1167], 99.50th=[ 1183], 99.90th=[ 1284], 99.95th=[ 1284], 00:17:06.420 | 99.99th=[ 1284] 00:17:06.420 bw ( KiB/s): min= 8704, max=25088, per=1.92%, avg=16455.50, stdev=4421.93, samples=20 00:17:06.420 iops : min= 34, max= 98, avg=64.15, stdev=17.25, samples=20 00:17:06.420 lat (msec) : 250=0.99%, 500=6.79%, 750=4.10%, 
1000=54.31%, 2000=33.80% 00:17:06.420 cpu : usr=0.01%, sys=0.41%, ctx=115, majf=0, minf=4097 00:17:06.420 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:17:06.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:17:06.420 issued rwts: total=707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 job8: (groupid=0, jobs=1): err= 0: pid=88979: Mon Dec 16 01:38:35 2024 00:17:06.420 read: IOPS=250, BW=62.6MiB/s (65.6MB/s)(632MiB/10092msec) 00:17:06.420 slat (usec): min=18, max=83416, avg=3950.84, stdev=9518.44 00:17:06.420 clat (msec): min=36, max=336, avg=251.33, stdev=35.68 00:17:06.420 lat (msec): min=36, max=336, avg=255.28, stdev=36.09 00:17:06.420 clat percentiles (msec): 00:17:06.420 | 1.00th=[ 100], 5.00th=[ 188], 10.00th=[ 224], 20.00th=[ 239], 00:17:06.420 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 259], 60.00th=[ 262], 00:17:06.420 | 70.00th=[ 268], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 288], 00:17:06.420 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 338], 00:17:06.420 | 99.99th=[ 338] 00:17:06.420 bw ( KiB/s): min=58368, max=70797, per=7.36%, avg=63028.15, stdev=2714.57, samples=20 00:17:06.420 iops : min= 228, max= 276, avg=246.15, stdev=10.54, samples=20 00:17:06.420 lat (msec) : 50=0.48%, 100=0.55%, 250=32.86%, 500=66.11% 00:17:06.420 cpu : usr=0.20%, sys=1.12%, ctx=510, majf=0, minf=4097 00:17:06.420 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:06.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.420 issued rwts: total=2526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 job9: (groupid=0, jobs=1): err= 0: pid=88980: Mon Dec 16 01:38:35 2024 00:17:06.420 read: IOPS=72, BW=18.2MiB/s (19.1MB/s)(186MiB/10180msec) 00:17:06.420 slat (usec): min=20, max=375214, avg=13156.90, stdev=39424.17 00:17:06.420 clat (msec): min=22, max=1199, avg=863.03, stdev=218.74 00:17:06.420 lat (msec): min=23, max=1199, avg=876.18, stdev=220.63 00:17:06.420 clat percentiles (msec): 00:17:06.420 | 1.00th=[ 188], 5.00th=[ 243], 10.00th=[ 498], 20.00th=[ 802], 00:17:06.420 | 30.00th=[ 835], 40.00th=[ 894], 50.00th=[ 927], 60.00th=[ 961], 00:17:06.420 | 70.00th=[ 995], 80.00th=[ 1011], 90.00th=[ 1036], 95.00th=[ 1062], 00:17:06.420 | 99.00th=[ 1099], 99.50th=[ 1183], 99.90th=[ 1200], 99.95th=[ 1200], 00:17:06.420 | 99.99th=[ 1200] 00:17:06.420 bw ( KiB/s): min=11241, max=24064, per=2.03%, avg=17358.55, stdev=3436.86, samples=20 00:17:06.420 iops : min= 43, max= 94, avg=67.65, stdev=13.46, samples=20 00:17:06.420 lat (msec) : 50=0.27%, 250=5.39%, 500=4.58%, 750=4.18%, 1000=58.09% 00:17:06.420 lat (msec) : 2000=27.49% 00:17:06.420 cpu : usr=0.02%, sys=0.39%, ctx=131, majf=0, minf=4097 00:17:06.420 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:17:06.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.420 issued rwts: total=742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 job10: (groupid=0, jobs=1): err= 0: pid=88981: Mon Dec 16 01:38:35 2024 
00:17:06.420 read: IOPS=923, BW=231MiB/s (242MB/s)(2318MiB/10039msec) 00:17:06.420 slat (usec): min=20, max=9328, avg=1072.82, stdev=1956.82 00:17:06.420 clat (msec): min=10, max=109, avg=68.10, stdev= 5.38 00:17:06.420 lat (msec): min=10, max=109, avg=69.17, stdev= 5.49 00:17:06.420 clat percentiles (msec): 00:17:06.420 | 1.00th=[ 49], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:17:06.420 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 69], 00:17:06.420 | 70.00th=[ 70], 80.00th=[ 71], 90.00th=[ 73], 95.00th=[ 74], 00:17:06.420 | 99.00th=[ 77], 99.50th=[ 79], 99.90th=[ 94], 99.95th=[ 102], 00:17:06.420 | 99.99th=[ 110] 00:17:06.420 bw ( KiB/s): min=227328, max=244758, per=27.52%, avg=235724.50, stdev=3957.02, samples=20 00:17:06.420 iops : min= 888, max= 956, avg=920.70, stdev=15.34, samples=20 00:17:06.420 lat (msec) : 20=0.30%, 50=0.70%, 100=98.92%, 250=0.08% 00:17:06.420 cpu : usr=0.54%, sys=4.07%, ctx=2319, majf=0, minf=4097 00:17:06.420 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:17:06.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.420 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:06.420 issued rwts: total=9271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.420 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:06.420 00:17:06.420 Run status group 0 (all jobs): 00:17:06.420 READ: bw=836MiB/s (877MB/s), 16.9MiB/s-324MiB/s (17.7MB/s-340MB/s), io=8517MiB (8931MB), run=10022-10183msec 00:17:06.420 00:17:06.420 Disk stats (read/write): 00:17:06.420 nvme0n1: ios=1312/0, merge=0/0, ticks=1181383/0, in_queue=1181383, util=97.80% 00:17:06.420 nvme10n1: ios=4837/0, merge=0/0, ticks=1230446/0, in_queue=1230446, util=97.89% 00:17:06.420 nvme1n1: ios=1245/0, merge=0/0, ticks=1197973/0, in_queue=1197973, util=98.01% 00:17:06.420 nvme2n1: ios=1252/0, merge=0/0, ticks=1188511/0, in_queue=1188511, util=98.03% 00:17:06.420 nvme3n1: ios=4902/0, merge=0/0, ticks=1230231/0, in_queue=1230231, util=98.27% 00:17:06.420 nvme4n1: ios=25928/0, merge=0/0, ticks=1243641/0, in_queue=1243641, util=98.57% 00:17:06.420 nvme5n1: ios=1338/0, merge=0/0, ticks=1175416/0, in_queue=1175416, util=98.43% 00:17:06.420 nvme6n1: ios=1287/0, merge=0/0, ticks=1182937/0, in_queue=1182937, util=98.54% 00:17:06.420 nvme7n1: ios=4924/0, merge=0/0, ticks=1230216/0, in_queue=1230216, util=98.89% 00:17:06.420 nvme8n1: ios=1356/0, merge=0/0, ticks=1187370/0, in_queue=1187370, util=99.02% 00:17:06.420 nvme9n1: ios=18415/0, merge=0/0, ticks=1232783/0, in_queue=1232783, util=99.02% 00:17:06.420 01:38:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:17:06.420 [global] 00:17:06.420 thread=1 00:17:06.420 invalidate=1 00:17:06.420 rw=randwrite 00:17:06.420 time_based=1 00:17:06.420 runtime=10 00:17:06.420 ioengine=libaio 00:17:06.420 direct=1 00:17:06.420 bs=262144 00:17:06.420 iodepth=64 00:17:06.420 norandommap=1 00:17:06.420 numjobs=1 00:17:06.420 00:17:06.420 [job0] 00:17:06.420 filename=/dev/nvme0n1 00:17:06.420 [job1] 00:17:06.420 filename=/dev/nvme10n1 00:17:06.420 [job2] 00:17:06.421 filename=/dev/nvme1n1 00:17:06.421 [job3] 00:17:06.421 filename=/dev/nvme2n1 00:17:06.421 [job4] 00:17:06.421 filename=/dev/nvme3n1 00:17:06.421 [job5] 00:17:06.421 filename=/dev/nvme4n1 00:17:06.421 [job6] 00:17:06.421 filename=/dev/nvme5n1 00:17:06.421 [job7] 00:17:06.421 filename=/dev/nvme6n1 
00:17:06.421 [job8] 00:17:06.421 filename=/dev/nvme7n1 00:17:06.421 [job9] 00:17:06.421 filename=/dev/nvme8n1 00:17:06.421 [job10] 00:17:06.421 filename=/dev/nvme9n1 00:17:06.421 Could not set queue depth (nvme0n1) 00:17:06.421 Could not set queue depth (nvme10n1) 00:17:06.421 Could not set queue depth (nvme1n1) 00:17:06.421 Could not set queue depth (nvme2n1) 00:17:06.421 Could not set queue depth (nvme3n1) 00:17:06.421 Could not set queue depth (nvme4n1) 00:17:06.421 Could not set queue depth (nvme5n1) 00:17:06.421 Could not set queue depth (nvme6n1) 00:17:06.421 Could not set queue depth (nvme7n1) 00:17:06.421 Could not set queue depth (nvme8n1) 00:17:06.421 Could not set queue depth (nvme9n1) 00:17:06.421 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:17:06.421 fio-3.35 00:17:06.421 Starting 11 threads 00:17:16.399 00:17:16.399 job0: (groupid=0, jobs=1): err= 0: pid=89181: Mon Dec 16 01:38:45 2024 00:17:16.399 write: IOPS=345, BW=86.4MiB/s (90.6MB/s)(875MiB/10118msec); 0 zone resets 00:17:16.399 slat (usec): min=17, max=85484, avg=2826.65, stdev=5178.39 00:17:16.399 clat (msec): min=87, max=311, avg=182.23, stdev=35.60 00:17:16.399 lat (msec): min=87, max=311, avg=185.05, stdev=35.72 00:17:16.399 clat percentiles (msec): 00:17:16.399 | 1.00th=[ 132], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 150], 00:17:16.399 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 201], 00:17:16.399 | 70.00th=[ 211], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 232], 00:17:16.399 | 99.00th=[ 296], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 313], 00:17:16.399 | 99.99th=[ 313] 00:17:16.399 bw ( KiB/s): min=59392, max=108544, per=10.58%, avg=87942.00, stdev=17083.24, samples=20 00:17:16.399 iops : min= 232, max= 424, avg=343.50, stdev=66.77, samples=20 00:17:16.399 lat (msec) : 100=0.17%, 250=96.86%, 500=2.97% 00:17:16.399 cpu : usr=0.62%, sys=1.06%, ctx=4613, majf=0, minf=1 00:17:16.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:16.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.399 issued rwts: total=0,3498,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:17:16.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.399 job1: (groupid=0, jobs=1): err= 0: pid=89182: Mon Dec 16 01:38:45 2024 00:17:16.399 write: IOPS=261, BW=65.5MiB/s (68.6MB/s)(668MiB/10210msec); 0 zone resets 00:17:16.399 slat (usec): min=17, max=85419, avg=3657.24, stdev=7077.35 00:17:16.399 clat (msec): min=87, max=581, avg=240.67, stdev=80.94 00:17:16.399 lat (msec): min=87, max=581, avg=244.33, stdev=81.97 00:17:16.399 clat percentiles (msec): 00:17:16.399 | 1.00th=[ 117], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:17:16.399 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 205], 00:17:16.399 | 70.00th=[ 245], 80.00th=[ 355], 90.00th=[ 384], 95.00th=[ 388], 00:17:16.399 | 99.00th=[ 414], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 584], 00:17:16.399 | 99.99th=[ 584] 00:17:16.399 bw ( KiB/s): min=40960, max=94208, per=8.04%, avg=66822.95, stdev=19377.86, samples=20 00:17:16.399 iops : min= 160, max= 368, avg=261.00, stdev=75.69, samples=20 00:17:16.399 lat (msec) : 100=0.41%, 250=71.34%, 500=27.72%, 750=0.52% 00:17:16.399 cpu : usr=0.36%, sys=0.94%, ctx=2423, majf=0, minf=1 00:17:16.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:16.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.399 issued rwts: total=0,2673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.399 job2: (groupid=0, jobs=1): err= 0: pid=89194: Mon Dec 16 01:38:45 2024 00:17:16.399 write: IOPS=341, BW=85.3MiB/s (89.5MB/s)(858MiB/10056msec); 0 zone resets 00:17:16.399 slat (usec): min=16, max=65884, avg=2861.46, stdev=6086.55 00:17:16.399 clat (usec): min=1457, max=330099, avg=184571.15, stdev=100253.05 00:17:16.399 lat (msec): min=2, max=330, avg=187.43, stdev=101.73 00:17:16.399 clat percentiles (msec): 00:17:16.399 | 1.00th=[ 11], 5.00th=[ 57], 10.00th=[ 66], 20.00th=[ 69], 00:17:16.399 | 30.00th=[ 72], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 236], 00:17:16.399 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 309], 95.00th=[ 317], 00:17:16.399 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 330], 99.95th=[ 330], 00:17:16.399 | 99.99th=[ 330] 00:17:16.399 bw ( KiB/s): min=51200, max=232960, per=10.38%, avg=86272.00, stdev=58029.11, samples=20 00:17:16.399 iops : min= 200, max= 910, avg=337.00, stdev=226.68, samples=20 00:17:16.399 lat (msec) : 2=0.03%, 4=0.09%, 10=0.79%, 20=1.05%, 50=2.59% 00:17:16.399 lat (msec) : 100=29.33%, 250=29.86%, 500=36.27% 00:17:16.399 cpu : usr=0.62%, sys=1.02%, ctx=2877, majf=0, minf=1 00:17:16.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:16.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.399 issued rwts: total=0,3433,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.399 job3: (groupid=0, jobs=1): err= 0: pid=89195: Mon Dec 16 01:38:45 2024 00:17:16.399 write: IOPS=344, BW=86.2MiB/s (90.3MB/s)(872MiB/10118msec); 0 zone resets 00:17:16.399 slat (usec): min=17, max=121322, avg=2836.48, stdev=5375.32 00:17:16.399 clat (msec): min=110, max=334, avg=182.77, stdev=36.13 00:17:16.399 lat (msec): min=119, max=348, avg=185.61, stdev=36.27 00:17:16.399 clat percentiles (msec): 00:17:16.399 | 
1.00th=[ 134], 5.00th=[ 142], 10.00th=[ 148], 20.00th=[ 150], 00:17:16.399 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 201], 00:17:16.399 | 70.00th=[ 211], 80.00th=[ 215], 90.00th=[ 220], 95.00th=[ 232], 00:17:16.399 | 99.00th=[ 300], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 334], 00:17:16.399 | 99.99th=[ 334] 00:17:16.399 bw ( KiB/s): min=54784, max=110592, per=10.54%, avg=87654.40, stdev=17778.62, samples=20 00:17:16.399 iops : min= 214, max= 432, avg=342.40, stdev=69.45, samples=20 00:17:16.399 lat (msec) : 250=96.73%, 500=3.27% 00:17:16.399 cpu : usr=0.63%, sys=1.06%, ctx=4085, majf=0, minf=1 00:17:16.399 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:17:16.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.399 issued rwts: total=0,3487,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.399 job4: (groupid=0, jobs=1): err= 0: pid=89196: Mon Dec 16 01:38:45 2024 00:17:16.399 write: IOPS=247, BW=61.8MiB/s (64.8MB/s)(631MiB/10221msec); 0 zone resets 00:17:16.399 slat (usec): min=17, max=34779, avg=3892.90, stdev=7285.40 00:17:16.399 clat (msec): min=14, max=581, avg=255.06, stdev=79.06 00:17:16.399 lat (msec): min=14, max=581, avg=258.95, stdev=80.03 00:17:16.399 clat percentiles (msec): 00:17:16.399 | 1.00th=[ 82], 5.00th=[ 194], 10.00th=[ 201], 20.00th=[ 211], 00:17:16.399 | 30.00th=[ 213], 40.00th=[ 215], 50.00th=[ 218], 60.00th=[ 228], 00:17:16.399 | 70.00th=[ 266], 80.00th=[ 359], 90.00th=[ 384], 95.00th=[ 393], 00:17:16.399 | 99.00th=[ 439], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 584], 00:17:16.399 | 99.99th=[ 584] 00:17:16.399 bw ( KiB/s): min=40960, max=80384, per=7.58%, avg=63001.60, stdev=15960.40, samples=20 00:17:16.399 iops : min= 160, max= 314, avg=246.10, stdev=62.35, samples=20 00:17:16.399 lat (msec) : 20=0.20%, 50=0.28%, 100=1.07%, 250=66.30%, 500=31.60% 00:17:16.399 lat (msec) : 750=0.55% 00:17:16.399 cpu : usr=0.39%, sys=0.81%, ctx=2882, majf=0, minf=1 00:17:16.399 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:17:16.399 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.399 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.399 issued rwts: total=0,2525,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.399 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.399 job5: (groupid=0, jobs=1): err= 0: pid=89197: Mon Dec 16 01:38:45 2024 00:17:16.399 write: IOPS=299, BW=74.8MiB/s (78.4MB/s)(755MiB/10097msec); 0 zone resets 00:17:16.399 slat (usec): min=18, max=39757, avg=3305.90, stdev=6256.09 00:17:16.399 clat (msec): min=20, max=314, avg=210.60, stdev=76.42 00:17:16.399 lat (msec): min=20, max=314, avg=213.91, stdev=77.40 00:17:16.399 clat percentiles (msec): 00:17:16.399 | 1.00th=[ 87], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 111], 00:17:16.399 | 30.00th=[ 171], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 259], 00:17:16.400 | 70.00th=[ 284], 80.00th=[ 296], 90.00th=[ 300], 95.00th=[ 305], 00:17:16.400 | 99.00th=[ 313], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 317], 00:17:16.400 | 99.99th=[ 317] 00:17:16.400 bw ( KiB/s): min=53248, max=155648, per=9.11%, avg=75699.20, stdev=31002.67, samples=20 00:17:16.400 iops : min= 208, max= 608, avg=295.70, stdev=121.10, samples=20 00:17:16.400 lat (msec) : 50=0.53%, 100=4.83%, 250=53.48%, 500=41.16% 
00:17:16.400 cpu : usr=0.55%, sys=0.91%, ctx=2442, majf=0, minf=2 00:17:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:17:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.400 issued rwts: total=0,3020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.400 job6: (groupid=0, jobs=1): err= 0: pid=89198: Mon Dec 16 01:38:45 2024 00:17:16.400 write: IOPS=211, BW=53.0MiB/s (55.5MB/s)(541MiB/10212msec); 0 zone resets 00:17:16.400 slat (usec): min=18, max=29427, avg=4367.40, stdev=8220.55 00:17:16.400 clat (msec): min=29, max=579, avg=297.65, stdev=66.90 00:17:16.400 lat (msec): min=29, max=579, avg=302.01, stdev=67.72 00:17:16.400 clat percentiles (msec): 00:17:16.400 | 1.00th=[ 91], 5.00th=[ 184], 10.00th=[ 224], 20.00th=[ 257], 00:17:16.400 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 296], 60.00th=[ 305], 00:17:16.400 | 70.00th=[ 309], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 393], 00:17:16.400 | 99.00th=[ 460], 99.50th=[ 506], 99.90th=[ 558], 99.95th=[ 584], 00:17:16.400 | 99.99th=[ 584] 00:17:16.400 bw ( KiB/s): min=40960, max=73728, per=6.46%, avg=53734.40, stdev=9572.12, samples=20 00:17:16.400 iops : min= 160, max= 288, avg=209.90, stdev=37.39, samples=20 00:17:16.400 lat (msec) : 50=0.37%, 100=0.74%, 250=17.71%, 500=80.54%, 750=0.65% 00:17:16.400 cpu : usr=0.43%, sys=0.65%, ctx=2503, majf=0, minf=1 00:17:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:17:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.400 issued rwts: total=0,2163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.400 job7: (groupid=0, jobs=1): err= 0: pid=89199: Mon Dec 16 01:38:45 2024 00:17:16.400 write: IOPS=406, BW=102MiB/s (106MB/s)(1028MiB/10123msec); 0 zone resets 00:17:16.400 slat (usec): min=13, max=23694, avg=2415.76, stdev=4563.72 00:17:16.400 clat (msec): min=6, max=309, avg=155.08, stdev=58.29 00:17:16.400 lat (msec): min=6, max=309, avg=157.50, stdev=59.02 00:17:16.400 clat percentiles (msec): 00:17:16.400 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 136], 00:17:16.400 | 30.00th=[ 148], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 163], 00:17:16.400 | 70.00th=[ 201], 80.00th=[ 213], 90.00th=[ 215], 95.00th=[ 220], 00:17:16.400 | 99.00th=[ 255], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 309], 00:17:16.400 | 99.99th=[ 309] 00:17:16.400 bw ( KiB/s): min=73728, max=296960, per=12.47%, avg=103654.40, stdev=48814.13, samples=20 00:17:16.400 iops : min= 288, max= 1160, avg=404.90, stdev=190.68, samples=20 00:17:16.400 lat (msec) : 10=0.12%, 20=0.29%, 50=4.35%, 100=14.30%, 250=79.69% 00:17:16.400 lat (msec) : 500=1.24% 00:17:16.400 cpu : usr=0.77%, sys=1.23%, ctx=5159, majf=0, minf=2 00:17:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:17:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.400 issued rwts: total=0,4112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.400 job8: (groupid=0, jobs=1): err= 0: pid=89200: Mon Dec 16 01:38:45 2024 
00:17:16.400 write: IOPS=260, BW=65.0MiB/s (68.2MB/s)(664MiB/10214msec); 0 zone resets 00:17:16.400 slat (usec): min=18, max=67590, avg=3765.84, stdev=7068.33 00:17:16.400 clat (msec): min=16, max=577, avg=242.25, stdev=81.83 00:17:16.400 lat (msec): min=16, max=577, avg=246.02, stdev=82.81 00:17:16.400 clat percentiles (msec): 00:17:16.400 | 1.00th=[ 63], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 192], 00:17:16.400 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 213], 00:17:16.400 | 70.00th=[ 247], 80.00th=[ 355], 90.00th=[ 384], 95.00th=[ 388], 00:17:16.400 | 99.00th=[ 409], 99.50th=[ 506], 99.90th=[ 550], 99.95th=[ 575], 00:17:16.400 | 99.99th=[ 575] 00:17:16.400 bw ( KiB/s): min=40960, max=83968, per=7.98%, avg=66355.20, stdev=18668.81, samples=20 00:17:16.400 iops : min= 160, max= 328, avg=259.20, stdev=72.93, samples=20 00:17:16.400 lat (msec) : 20=0.15%, 50=0.60%, 100=0.90%, 250=69.05%, 500=28.77% 00:17:16.400 lat (msec) : 750=0.53% 00:17:16.400 cpu : usr=0.40%, sys=0.86%, ctx=2585, majf=0, minf=1 00:17:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.400 issued rwts: total=0,2656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.400 job9: (groupid=0, jobs=1): err= 0: pid=89201: Mon Dec 16 01:38:45 2024 00:17:16.400 write: IOPS=296, BW=74.2MiB/s (77.8MB/s)(749MiB/10097msec); 0 zone resets 00:17:16.400 slat (usec): min=17, max=51436, avg=3330.61, stdev=6368.20 00:17:16.400 clat (msec): min=17, max=332, avg=212.23, stdev=79.14 00:17:16.400 lat (msec): min=17, max=333, avg=215.56, stdev=80.18 00:17:16.400 clat percentiles (msec): 00:17:16.400 | 1.00th=[ 83], 5.00th=[ 99], 10.00th=[ 104], 20.00th=[ 111], 00:17:16.400 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 251], 00:17:16.400 | 70.00th=[ 288], 80.00th=[ 300], 90.00th=[ 309], 95.00th=[ 317], 00:17:16.400 | 99.00th=[ 326], 99.50th=[ 330], 99.90th=[ 334], 99.95th=[ 334], 00:17:16.400 | 99.99th=[ 334] 00:17:16.400 bw ( KiB/s): min=51200, max=155648, per=9.04%, avg=75118.75, stdev=31503.81, samples=20 00:17:16.400 iops : min= 200, max= 608, avg=293.40, stdev=123.05, samples=20 00:17:16.400 lat (msec) : 20=0.13%, 50=0.40%, 100=4.94%, 250=54.35%, 500=40.17% 00:17:16.400 cpu : usr=0.55%, sys=0.92%, ctx=3769, majf=0, minf=1 00:17:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:17:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.400 issued rwts: total=0,2997,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.400 job10: (groupid=0, jobs=1): err= 0: pid=89202: Mon Dec 16 01:38:45 2024 00:17:16.400 write: IOPS=257, BW=64.4MiB/s (67.6MB/s)(657MiB/10200msec); 0 zone resets 00:17:16.400 slat (usec): min=20, max=123820, avg=3739.44, stdev=7320.36 00:17:16.400 clat (msec): min=124, max=581, avg=244.45, stdev=78.94 00:17:16.400 lat (msec): min=124, max=581, avg=248.19, stdev=79.87 00:17:16.400 clat percentiles (msec): 00:17:16.400 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:17:16.400 | 30.00th=[ 194], 40.00th=[ 197], 50.00th=[ 199], 60.00th=[ 207], 00:17:16.400 | 70.00th=[ 249], 80.00th=[ 355], 90.00th=[ 384], 
95.00th=[ 388], 00:17:16.400 | 99.00th=[ 414], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 584], 00:17:16.400 | 99.99th=[ 584] 00:17:16.400 bw ( KiB/s): min=40960, max=86016, per=7.90%, avg=65689.60, stdev=18637.53, samples=20 00:17:16.400 iops : min= 160, max= 336, avg=256.60, stdev=72.80, samples=20 00:17:16.400 lat (msec) : 250=70.33%, 500=29.14%, 750=0.53% 00:17:16.400 cpu : usr=0.44%, sys=0.83%, ctx=3309, majf=0, minf=1 00:17:16.400 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:17:16.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:17:16.400 issued rwts: total=0,2629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:16.400 00:17:16.400 Run status group 0 (all jobs): 00:17:16.400 WRITE: bw=812MiB/s (851MB/s), 53.0MiB/s-102MiB/s (55.5MB/s-106MB/s), io=8298MiB (8701MB), run=10056-10221msec 00:17:16.400 00:17:16.400 Disk stats (read/write): 00:17:16.400 nvme0n1: ios=49/6857, merge=0/0, ticks=48/1212696, in_queue=1212744, util=97.80% 00:17:16.400 nvme10n1: ios=49/5216, merge=0/0, ticks=49/1200375, in_queue=1200424, util=97.95% 00:17:16.400 nvme1n1: ios=36/6715, merge=0/0, ticks=26/1217951, in_queue=1217977, util=97.97% 00:17:16.400 nvme2n1: ios=28/6830, merge=0/0, ticks=50/1211661, in_queue=1211711, util=98.02% 00:17:16.400 nvme3n1: ios=5/4920, merge=0/0, ticks=5/1201649, in_queue=1201654, util=98.08% 00:17:16.400 nvme4n1: ios=0/5907, merge=0/0, ticks=0/1216388, in_queue=1216388, util=98.36% 00:17:16.400 nvme5n1: ios=0/4195, merge=0/0, ticks=0/1202068, in_queue=1202068, util=98.33% 00:17:16.400 nvme6n1: ios=0/8087, merge=0/0, ticks=0/1213152, in_queue=1213152, util=98.45% 00:17:16.400 nvme7n1: ios=0/5180, merge=0/0, ticks=0/1199475, in_queue=1199475, util=98.69% 00:17:16.400 nvme8n1: ios=0/5848, merge=0/0, ticks=0/1215059, in_queue=1215059, util=98.81% 00:17:16.400 nvme9n1: ios=0/5128, merge=0/0, ticks=0/1198436, in_queue=1198436, util=98.78% 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:16.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:17:16.400 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # 
return 0 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.400 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:17:16.400 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # 
return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # 
return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # 
return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:17:16.401 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # 
return 0 00:17:16.401 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:17:16.402 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:17:16.402 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.402 rmmod nvme_tcp 00:17:16.402 rmmod nvme_fabrics 00:17:16.402 rmmod nvme_keyring 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 88524 ']' 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 88524 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 88524 ']' 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 88524 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.402 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88524 00:17:16.402 killing process with pid 88524 00:17:16.402 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.402 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.402 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88524' 00:17:16.402 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 88524 00:17:16.402 01:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 88524 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:16.661 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:16.920 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:17:16.921 00:17:16.921 real 0m48.619s 00:17:16.921 
user 2m46.323s 00:17:16.921 sys 0m26.012s 00:17:16.921 ************************************ 00:17:16.921 END TEST nvmf_multiconnection 00:17:16.921 ************************************ 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.921 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:17.180 ************************************ 00:17:17.180 START TEST nvmf_initiator_timeout 00:17:17.180 ************************************ 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:17:17.180 * Looking for test storage... 00:17:17.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.180 01:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:17.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.180 --rc genhtml_branch_coverage=1 00:17:17.180 --rc genhtml_function_coverage=1 00:17:17.180 --rc genhtml_legend=1 00:17:17.180 --rc geninfo_all_blocks=1 00:17:17.180 --rc geninfo_unexecuted_blocks=1 00:17:17.180 00:17:17.180 ' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:17.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.180 --rc genhtml_branch_coverage=1 00:17:17.180 --rc genhtml_function_coverage=1 00:17:17.180 --rc genhtml_legend=1 00:17:17.180 --rc geninfo_all_blocks=1 00:17:17.180 --rc geninfo_unexecuted_blocks=1 00:17:17.180 00:17:17.180 ' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:17.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.180 --rc genhtml_branch_coverage=1 00:17:17.180 --rc genhtml_function_coverage=1 00:17:17.180 --rc genhtml_legend=1 00:17:17.180 --rc geninfo_all_blocks=1 00:17:17.180 --rc geninfo_unexecuted_blocks=1 00:17:17.180 00:17:17.180 ' 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:17.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.180 --rc genhtml_branch_coverage=1 00:17:17.180 --rc genhtml_function_coverage=1 00:17:17.180 --rc genhtml_legend=1 00:17:17.180 --rc geninfo_all_blocks=1 00:17:17.180 --rc geninfo_unexecuted_blocks=1 00:17:17.180 00:17:17.180 ' 
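[editor's aside] The trace above is scripts/common.sh comparing the installed lcov version against 1.15 field by field (the lt 1.15 2 / cmp_versions calls) before deciding which coverage flags to export. Purely as an illustration of that dotted-version comparison, and not the SPDK helper itself, a minimal bash sketch of the same shape could look like this (ver_lt is a hypothetical name):

    # Hypothetical sketch: succeed if version $1 sorts before version $2,
    # comparing dot-separated numeric fields left to right.
    ver_lt() {
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    # e.g. mirror the trace: only keep the --rc style coverage options on a pre-2.x lcov
    if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi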
00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.180 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.181 01:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.181 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.181 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
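[editor's aside] Below, nvmf_veth_init first tears down any leftover interfaces (hence the "Cannot find device" messages) and then wires up the virtual test network: veth pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, host-side ends enslaved to the nvmf_br bridge, and iptables ACCEPT rules for the NVMe/TCP port. A condensed sketch of that topology, showing a single initiator/target pair with the names and addresses from this log (run as root; the real script also creates the second pair and the .2/.4 addresses), would be:

    # One initiator-side and one target-side veth pair, bridged together.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end stays on the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                              # connectivity check, as in the log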
00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:17.440 Cannot find device "nvmf_init_br" 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:17.440 Cannot find device "nvmf_init_br2" 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:17:17.440 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:17.440 Cannot find device "nvmf_tgt_br" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.441 Cannot find device "nvmf_tgt_br2" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:17.441 Cannot find device "nvmf_init_br" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:17.441 Cannot find device "nvmf_init_br2" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:17.441 Cannot find device "nvmf_tgt_br" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:17.441 Cannot find device "nvmf_tgt_br2" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:17:17.441 01:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:17.441 Cannot find device "nvmf_br" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:17.441 Cannot find device "nvmf_init_if" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:17.441 Cannot find device "nvmf_init_if2" 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.441 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.441 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:17.441 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.441 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.441 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.441 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.441 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.441 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:17.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:17:17.700 00:17:17.700 --- 10.0.0.3 ping statistics --- 00:17:17.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.700 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:17.700 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:17.700 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:17:17.700 00:17:17.700 --- 10.0.0.4 ping statistics --- 00:17:17.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.700 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:17.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:17.700 00:17:17.700 --- 10.0.0.1 ping statistics --- 00:17:17.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.700 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:17.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:17.700 00:17:17.700 --- 10.0.0.2 ping statistics --- 00:17:17.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.700 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=89617 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 89617 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 89617 ']' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.700 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.700 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.701 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:17.701 [2024-12-16 01:38:48.321766] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:17.701 [2024-12-16 01:38:48.321994] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.959 [2024-12-16 01:38:48.467741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.959 [2024-12-16 01:38:48.486978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.959 [2024-12-16 01:38:48.487027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.959 [2024-12-16 01:38:48.487053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.959 [2024-12-16 01:38:48.487060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.959 [2024-12-16 01:38:48.487066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
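[editor's aside] At this point nvmfappstart has launched nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten is polling /var/tmp/spdk.sock until the target answers RPCs. A minimal poll of the same shape, offered only as a hypothetical illustration (wait_for_rpc is not the SPDK helper), could be:

    # Hypothetical sketch: block until the SPDK RPC socket is being served, or give up.
    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100} i
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for ((i = 0; i < retries; i++)); do
            if "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
                return 0            # target is up and serving RPCs
            fi
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }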
00:17:17.959 [2024-12-16 01:38:48.487762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.959 [2024-12-16 01:38:48.488588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.959 [2024-12-16 01:38:48.488679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.959 [2024-12-16 01:38:48.488699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.959 [2024-12-16 01:38:48.517436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.959 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.959 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:17:17.959 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:17.959 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.959 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 Malloc0 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 Delay0 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 [2024-12-16 01:38:48.673755] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:18.218 01:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:18.218 [2024-12-16 01:38:48.701968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.218 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:17:20.749 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.749 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.749 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.750 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.750 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.750 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:17:20.750 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=89674 00:17:20.750 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:17:20.750 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:17:20.750 [global] 00:17:20.750 thread=1 00:17:20.750 invalidate=1 00:17:20.750 rw=write 00:17:20.750 time_based=1 00:17:20.750 runtime=60 00:17:20.750 ioengine=libaio 00:17:20.750 direct=1 00:17:20.750 bs=4096 00:17:20.750 iodepth=1 00:17:20.750 norandommap=0 00:17:20.750 numjobs=1 00:17:20.750 00:17:20.750 verify_dump=1 00:17:20.750 verify_backlog=512 00:17:20.750 verify_state_save=0 00:17:20.750 do_verify=1 00:17:20.750 verify=crc32c-intel 00:17:20.750 [job0] 00:17:20.750 filename=/dev/nvme0n1 00:17:20.750 Could not set queue depth (nvme0n1) 00:17:20.750 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.750 fio-3.35 00:17:20.750 Starting 1 thread 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.282 true 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.282 true 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.282 true 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:23.282 true 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.282 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:26.568 true 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:26.568 true 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:26.568 true 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:17:26.568 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.569 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:26.569 true 00:17:26.569 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.569 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:17:26.569 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 89674 00:18:22.802 00:18:22.802 job0: (groupid=0, jobs=1): err= 0: pid=89695: Mon Dec 16 01:39:51 2024 00:18:22.802 read: IOPS=836, BW=3347KiB/s (3428kB/s)(196MiB/60000msec) 00:18:22.802 slat (usec): min=10, max=10354, avg=13.55, stdev=57.49 00:18:22.802 clat (usec): min=155, max=40488k, avg=1004.18, stdev=180686.61 00:18:22.802 lat (usec): min=166, max=40488k, avg=1017.73, stdev=180686.63 00:18:22.802 clat percentiles (usec): 00:18:22.802 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:18:22.802 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:18:22.802 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 239], 00:18:22.802 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 494], 99.95th=[ 611], 00:18:22.802 | 99.99th=[ 857] 00:18:22.802 write: IOPS=844, BW=3379KiB/s (3460kB/s)(198MiB/60000msec); 0 zone resets 00:18:22.802 slat (usec): min=12, max=656, avg=19.31, stdev= 6.83 00:18:22.802 clat (usec): min=15, max=7953, avg=153.24, stdev=59.71 00:18:22.802 lat (usec): min=130, max=7981, avg=172.55, stdev=60.62 00:18:22.802 clat percentiles (usec): 00:18:22.802 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 128], 20.00th=[ 135], 00:18:22.802 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:18:22.802 | 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 192], 00:18:22.802 | 99.00th=[ 221], 
99.50th=[ 249], 99.90th=[ 506], 99.95th=[ 725], 00:18:22.802 | 99.99th=[ 1745] 00:18:22.802 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=10133.74, stdev=1580.30, samples=39 00:18:22.802 iops : min= 1024, max= 3072, avg=2533.44, stdev=395.07, samples=39 00:18:22.802 lat (usec) : 20=0.01%, 250=98.45%, 500=1.45%, 750=0.06%, 1000=0.02% 00:18:22.802 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:18:22.802 cpu : usr=0.53%, sys=2.26%, ctx=100905, majf=0, minf=5 00:18:22.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:22.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.802 issued rwts: total=50210,50688,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.802 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:22.802 00:18:22.802 Run status group 0 (all jobs): 00:18:22.802 READ: bw=3347KiB/s (3428kB/s), 3347KiB/s-3347KiB/s (3428kB/s-3428kB/s), io=196MiB (206MB), run=60000-60000msec 00:18:22.802 WRITE: bw=3379KiB/s (3460kB/s), 3379KiB/s-3379KiB/s (3460kB/s-3460kB/s), io=198MiB (208MB), run=60000-60000msec 00:18:22.802 00:18:22.802 Disk stats (read/write): 00:18:22.802 nvme0n1: ios=50430/50176, merge=0/0, ticks=10263/8164, in_queue=18427, util=99.75% 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:22.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:22.802 nvmf hotplug test: fio successful as expected 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
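[editor's aside] With the fio verify run finished and teardown underway, the RPC sequence this initiator_timeout test drove can be recapped in condensed form. The log uses the rpc_cmd wrapper and the fio-wrapper script for the 60 s libaio write/verify job against /dev/nvme0n1; the sketch below spells the same calls out with rpc.py directly and elides the fio run itself:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # host side: connect, then run the 60 s fio write/verify job
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053

    # while fio runs, push the delay bdev's latencies to ~31 s (microsecond units)
    # so I/O stalls past the initiator timeout, then drop them back to 30 us
    # (the traced script actually uses 310000000 for p99_write in the first pass)
    for m in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 $m 31000000
    done
    sleep 3
    for m in avg_read avg_write p99_read p99_write; do
        $rpc bdev_delay_update_latency Delay0 $m 30
    done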
00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:22.802 rmmod nvme_tcp 00:18:22.802 rmmod nvme_fabrics 00:18:22.802 rmmod nvme_keyring 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 89617 ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 89617 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 89617 ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 89617 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89617 00:18:22.802 killing process with pid 89617 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89617' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 89617 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 89617 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:18:22.802 01:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:22.802 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:18:22.803 ************************************ 00:18:22.803 END TEST nvmf_initiator_timeout 00:18:22.803 ************************************ 00:18:22.803 00:18:22.803 real 1m4.092s 00:18:22.803 user 3m50.154s 00:18:22.803 sys 0m21.900s 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:22.803 ************************************ 00:18:22.803 START TEST nvmf_nsid 00:18:22.803 ************************************ 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:18:22.803 * Looking for test storage... 00:18:22.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.803 --rc genhtml_branch_coverage=1 00:18:22.803 --rc genhtml_function_coverage=1 00:18:22.803 --rc genhtml_legend=1 00:18:22.803 --rc geninfo_all_blocks=1 00:18:22.803 --rc geninfo_unexecuted_blocks=1 00:18:22.803 00:18:22.803 ' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.803 --rc genhtml_branch_coverage=1 00:18:22.803 --rc genhtml_function_coverage=1 00:18:22.803 --rc genhtml_legend=1 00:18:22.803 --rc geninfo_all_blocks=1 00:18:22.803 --rc geninfo_unexecuted_blocks=1 00:18:22.803 00:18:22.803 ' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.803 --rc genhtml_branch_coverage=1 00:18:22.803 --rc genhtml_function_coverage=1 00:18:22.803 --rc genhtml_legend=1 00:18:22.803 --rc geninfo_all_blocks=1 00:18:22.803 --rc geninfo_unexecuted_blocks=1 00:18:22.803 00:18:22.803 ' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:22.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.803 --rc genhtml_branch_coverage=1 00:18:22.803 --rc genhtml_function_coverage=1 00:18:22.803 --rc genhtml_legend=1 00:18:22.803 --rc geninfo_all_blocks=1 00:18:22.803 --rc geninfo_unexecuted_blocks=1 00:18:22.803 00:18:22.803 ' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.803 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:22.804 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:22.804 Cannot find device "nvmf_init_br" 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:18:22.804 01:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:22.804 Cannot find device "nvmf_init_br2" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:22.804 Cannot find device "nvmf_tgt_br" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.804 Cannot find device "nvmf_tgt_br2" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:22.804 Cannot find device "nvmf_init_br" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:22.804 Cannot find device "nvmf_init_br2" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:22.804 Cannot find device "nvmf_tgt_br" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:22.804 Cannot find device "nvmf_tgt_br2" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:22.804 Cannot find device "nvmf_br" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:22.804 Cannot find device "nvmf_init_if" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:22.804 Cannot find device "nvmf_init_if2" 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:18:22.804 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:22.804 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
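At this point nvmf_veth_init has rebuilt the virtual test network: a namespace for the target, two veth pairs per side, addresses 10.0.0.1/.2 on the initiator ends and 10.0.0.3/.4 on the target ends, and a bridge joining the four peer interfaces. Condensed from the commands traced above (a paraphrase of the helper, not its verbatim source), the topology amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for p in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$p" master nvmf_br       # enslave all four peer ends to the bridge
  done

The iptables ACCEPT rules and the four pings that follow simply confirm that port 4420 is reachable across that bridge in both directions.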
00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:22.805 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:22.805 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:22.805 00:18:22.805 --- 10.0.0.3 ping statistics --- 00:18:22.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.805 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:22.805 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:22.805 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:18:22.805 00:18:22.805 --- 10.0.0.4 ping statistics --- 00:18:22.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.805 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:22.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:22.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:22.805 00:18:22.805 --- 10.0.0.1 ping statistics --- 00:18:22.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.805 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:22.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:22.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:18:22.805 00:18:22.805 --- 10.0.0.2 ping statistics --- 00:18:22.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:22.805 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=90574 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 90574 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 90574 ']' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:22.805 [2024-12-16 01:39:52.456793] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:22.805 [2024-12-16 01:39:52.456909] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.805 [2024-12-16 01:39:52.613686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.805 [2024-12-16 01:39:52.637317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.805 [2024-12-16 01:39:52.637379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.805 [2024-12-16 01:39:52.637393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.805 [2024-12-16 01:39:52.637403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.805 [2024-12-16 01:39:52.637412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:22.805 [2024-12-16 01:39:52.637775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.805 [2024-12-16 01:39:52.670979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=90597 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cbad7e4a-bf70-4e2b-938b-ecb50ef5ed8b 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1b04ff3b-b967-4290-9279-1dd52da1aa07 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=eea1c24d-1e54-4b9e-b5d8-2d2688e6ece8 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:22.805 null0 00:18:22.805 null1 00:18:22.805 null2 00:18:22.805 [2024-12-16 01:39:52.815105] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.805 [2024-12-16 01:39:52.827368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:22.805 [2024-12-16 01:39:52.827434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90597 ] 00:18:22.805 [2024-12-16 01:39:52.839245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:22.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 90597 /var/tmp/tgt2.sock 00:18:22.805 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 90597 ']' 00:18:22.806 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:18:22.806 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.806 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
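With both targets up (the main one listening on 10.0.0.3:4420, the second on 10.0.0.1:4421) and three null bdevs (null0-null2) exposed as namespaces carrying the freshly generated UUIDs, the checks that follow connect to nqn.2024-10.io.spdk:cnode2 and verify that each block device's NGUID equals its namespace UUID with the dashes stripped. A minimal sketch of that verification for the first namespace, mirroring the traced uuid2nguid / nvme_get_nguid helpers rather than reproducing them, is:

  uuid=cbad7e4a-bf70-4e2b-938b-ecb50ef5ed8b              # ns1uuid generated above
  expected=$(tr -d - <<< "${uuid^^}")                    # CBAD7E4ABF704E2B938BECB50EF5ED8B
  actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
  [[ "${actual^^}" == "$expected" ]] && echo "nguid matches ns1uuid"

The same comparison is repeated below for nvme0n2 and nvme0n3 against ns2uuid and ns3uuid.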
00:18:22.806 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.806 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:22.806 [2024-12-16 01:39:52.978125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.806 [2024-12-16 01:39:53.002922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.806 [2024-12-16 01:39:53.046329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:22.806 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.806 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:18:22.806 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:18:23.065 [2024-12-16 01:39:53.540959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.065 [2024-12-16 01:39:53.557037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:18:23.065 nvme0n1 nvme0n2 00:18:23.065 nvme1n1 00:18:23.065 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:18:23.065 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:18:23.065 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:18:23.324 01:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:18:24.260 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:24.260 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:24.261 01:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cbad7e4a-bf70-4e2b-938b-ecb50ef5ed8b 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cbad7e4abf704e2b938becb50ef5ed8b 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CBAD7E4ABF704E2B938BECB50EF5ED8B 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CBAD7E4ABF704E2B938BECB50EF5ED8B == \C\B\A\D\7\E\4\A\B\F\7\0\4\E\2\B\9\3\8\B\E\C\B\5\0\E\F\5\E\D\8\B ]] 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1b04ff3b-b967-4290-9279-1dd52da1aa07 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:18:24.261 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1b04ff3bb967429092791dd52da1aa07 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1B04FF3BB967429092791DD52DA1AA07 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1B04FF3BB967429092791DD52DA1AA07 == \1\B\0\4\F\F\3\B\B\9\6\7\4\2\9\0\9\2\7\9\1\D\D\5\2\D\A\1\A\A\0\7 ]] 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:24.520 01:39:54 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid eea1c24d-1e54-4b9e-b5d8-2d2688e6ece8 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:18:24.520 01:39:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=eea1c24d1e544b9eb5d82d2688e6ece8 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EEA1C24D1E544B9EB5D82D2688E6ECE8 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ EEA1C24D1E544B9EB5D82D2688E6ECE8 == \E\E\A\1\C\2\4\D\1\E\5\4\4\B\9\E\B\5\D\8\2\D\2\6\8\8\E\6\E\C\E\8 ]] 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 90597 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 90597 ']' 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 90597 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:24.520 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.778 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90597 00:18:24.778 killing process with pid 90597 00:18:24.778 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:24.778 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:24.778 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90597' 00:18:24.778 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 90597 00:18:24.778 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 90597 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:25.038 rmmod nvme_tcp 00:18:25.038 rmmod nvme_fabrics 00:18:25.038 rmmod nvme_keyring 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 90574 ']' 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 90574 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 90574 ']' 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 90574 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90574 00:18:25.038 killing process with pid 90574 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90574' 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 90574 00:18:25.038 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 90574 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:25.296 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.555 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.555 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:25.555 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.555 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.555 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.555 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:18:25.555 00:18:25.555 real 0m4.281s 00:18:25.555 user 0m6.228s 00:18:25.555 sys 0m1.559s 00:18:25.555 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.555 01:39:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:18:25.555 ************************************ 00:18:25.555 END TEST nvmf_nsid 00:18:25.555 ************************************ 00:18:25.555 01:39:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:18:25.555 ************************************ 00:18:25.556 END TEST nvmf_target_extra 00:18:25.556 ************************************ 00:18:25.556 00:18:25.556 real 6m51.699s 00:18:25.556 user 17m6.229s 00:18:25.556 sys 1m53.704s 00:18:25.556 01:39:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.556 01:39:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:25.556 01:39:56 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:25.556 01:39:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.556 01:39:56 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.556 01:39:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.556 ************************************ 00:18:25.556 START TEST nvmf_host 00:18:25.556 ************************************ 00:18:25.556 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:18:25.556 * Looking for test storage... 
00:18:25.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:25.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.815 --rc genhtml_branch_coverage=1 00:18:25.815 --rc genhtml_function_coverage=1 00:18:25.815 --rc genhtml_legend=1 00:18:25.815 --rc geninfo_all_blocks=1 00:18:25.815 --rc geninfo_unexecuted_blocks=1 00:18:25.815 00:18:25.815 ' 00:18:25.815 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:25.815 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:18:25.815 --rc genhtml_branch_coverage=1 00:18:25.815 --rc genhtml_function_coverage=1 00:18:25.815 --rc genhtml_legend=1 00:18:25.815 --rc geninfo_all_blocks=1 00:18:25.816 --rc geninfo_unexecuted_blocks=1 00:18:25.816 00:18:25.816 ' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:25.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.816 --rc genhtml_branch_coverage=1 00:18:25.816 --rc genhtml_function_coverage=1 00:18:25.816 --rc genhtml_legend=1 00:18:25.816 --rc geninfo_all_blocks=1 00:18:25.816 --rc geninfo_unexecuted_blocks=1 00:18:25.816 00:18:25.816 ' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:25.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.816 --rc genhtml_branch_coverage=1 00:18:25.816 --rc genhtml_function_coverage=1 00:18:25.816 --rc genhtml_legend=1 00:18:25.816 --rc geninfo_all_blocks=1 00:18:25.816 --rc geninfo_unexecuted_blocks=1 00:18:25.816 00:18:25.816 ' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.816 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:25.816 
01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.816 ************************************ 00:18:25.816 START TEST nvmf_identify 00:18:25.816 ************************************ 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:18:25.816 * Looking for test storage... 00:18:25.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.816 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:26.076 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:26.076 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.076 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.076 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.077 --rc genhtml_branch_coverage=1 00:18:26.077 --rc genhtml_function_coverage=1 00:18:26.077 --rc genhtml_legend=1 00:18:26.077 --rc geninfo_all_blocks=1 00:18:26.077 --rc geninfo_unexecuted_blocks=1 00:18:26.077 00:18:26.077 ' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.077 --rc genhtml_branch_coverage=1 00:18:26.077 --rc genhtml_function_coverage=1 00:18:26.077 --rc genhtml_legend=1 00:18:26.077 --rc geninfo_all_blocks=1 00:18:26.077 --rc geninfo_unexecuted_blocks=1 00:18:26.077 00:18:26.077 ' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.077 --rc genhtml_branch_coverage=1 00:18:26.077 --rc genhtml_function_coverage=1 00:18:26.077 --rc genhtml_legend=1 00:18:26.077 --rc geninfo_all_blocks=1 00:18:26.077 --rc geninfo_unexecuted_blocks=1 00:18:26.077 00:18:26.077 ' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:26.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.077 --rc genhtml_branch_coverage=1 00:18:26.077 --rc genhtml_function_coverage=1 00:18:26.077 --rc genhtml_legend=1 00:18:26.077 --rc geninfo_all_blocks=1 00:18:26.077 --rc geninfo_unexecuted_blocks=1 00:18:26.077 00:18:26.077 ' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.077 
01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.077 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.077 01:39:56 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.077 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:26.078 Cannot find device "nvmf_init_br" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:26.078 Cannot find device "nvmf_init_br2" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:26.078 Cannot find device "nvmf_tgt_br" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:26.078 Cannot find device "nvmf_tgt_br2" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:26.078 Cannot find device "nvmf_init_br" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:26.078 Cannot find device "nvmf_init_br2" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:26.078 Cannot find device "nvmf_tgt_br" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:26.078 Cannot find device "nvmf_tgt_br2" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:26.078 Cannot find device "nvmf_br" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:26.078 Cannot find device "nvmf_init_if" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:26.078 Cannot find device "nvmf_init_if2" 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:26.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:26.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:26.078 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:26.337 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:26.337 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:26.337 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:26.337 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:26.337 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.337 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.338 
01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:26.338 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:26.338 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:18:26.338 00:18:26.338 --- 10.0.0.3 ping statistics --- 00:18:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.338 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:26.338 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:26.338 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:18:26.338 00:18:26.338 --- 10.0.0.4 ping statistics --- 00:18:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.338 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:26.338 00:18:26.338 --- 10.0.0.1 ping statistics --- 00:18:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.338 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:26.338 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:26.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:18:26.338 00:18:26.338 --- 10.0.0.2 ping statistics --- 00:18:26.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.338 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:26.597 01:39:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=90951 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 90951 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 90951 ']' 00:18:26.597 
01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.597 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.597 [2024-12-16 01:39:57.089807] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:26.597 [2024-12-16 01:39:57.090131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.597 [2024-12-16 01:39:57.249816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.857 [2024-12-16 01:39:57.276349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.857 [2024-12-16 01:39:57.276668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.857 [2024-12-16 01:39:57.276900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.857 [2024-12-16 01:39:57.277105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.857 [2024-12-16 01:39:57.277215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
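At this point identify.sh has started nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (pid 90951) and is waiting on its RPC socket; the EAL notices above and the reactor/socket notices below all belong to that target process. Reduced to a standalone sketch, the launch-and-wait step looks roughly like this (the test itself uses the harness's waitforlisten helper; probing with rpc.py spdk_get_version is just one way to tell that /var/tmp/spdk.sock is up):

  # Start the target in the prepared network namespace, then block until its
  # JSON-RPC server responds before any configuration is attempted.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done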
00:18:26.857 [2024-12-16 01:39:57.278345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.857 [2024-12-16 01:39:57.278438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.857 [2024-12-16 01:39:57.278595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.857 [2024-12-16 01:39:57.278597] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.857 [2024-12-16 01:39:57.312953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 [2024-12-16 01:39:57.369427] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 Malloc0 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 [2024-12-16 01:39:57.470037] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:26.857 [ 00:18:26.857 { 00:18:26.857 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:26.857 "subtype": "Discovery", 00:18:26.857 "listen_addresses": [ 00:18:26.857 { 00:18:26.857 "trtype": "TCP", 00:18:26.857 "adrfam": "IPv4", 00:18:26.857 "traddr": "10.0.0.3", 00:18:26.857 "trsvcid": "4420" 00:18:26.857 } 00:18:26.857 ], 00:18:26.857 "allow_any_host": true, 00:18:26.857 "hosts": [] 00:18:26.857 }, 00:18:26.857 { 00:18:26.857 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.857 "subtype": "NVMe", 00:18:26.857 "listen_addresses": [ 00:18:26.857 { 00:18:26.857 "trtype": "TCP", 00:18:26.857 "adrfam": "IPv4", 00:18:26.857 "traddr": "10.0.0.3", 00:18:26.857 "trsvcid": "4420" 00:18:26.857 } 00:18:26.857 ], 00:18:26.857 "allow_any_host": true, 00:18:26.857 "hosts": [], 00:18:26.857 "serial_number": "SPDK00000000000001", 00:18:26.857 "model_number": "SPDK bdev Controller", 00:18:26.857 "max_namespaces": 32, 00:18:26.857 "min_cntlid": 1, 00:18:26.857 "max_cntlid": 65519, 00:18:26.857 "namespaces": [ 00:18:26.857 { 00:18:26.857 "nsid": 1, 00:18:26.857 "bdev_name": "Malloc0", 00:18:26.857 "name": "Malloc0", 00:18:26.857 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:18:26.857 "eui64": "ABCDEF0123456789", 00:18:26.857 "uuid": "f8efe774-030d-4b9c-b58d-85ecd952178d" 00:18:26.857 } 00:18:26.857 ] 00:18:26.857 } 00:18:26.857 ] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.857 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:18:27.119 [2024-12-16 01:39:57.529082] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
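Everything the identify tool is about to query was set up by the rpc_cmd calls above: a TCP transport, a 64 MB/512-byte-block Malloc0 bdev exported as namespace 1 of nqn.2016-06.io.spdk:cnode1, and listeners for both that subsystem and discovery on 10.0.0.3:4420. Collected in one place, the same configuration could be issued directly with scripts/rpc.py (an assumption for illustration; in the test it is the rpc_cmd wrapper that sends these calls):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The debug trace that follows is spdk_nvme_identify connecting to the discovery subsystem over TCP and walking the usual controller bring-up (fabric CONNECT, VS/CAP reads, CC.EN=1, wait for CSTS.RDY, IDENTIFY, AER and keep-alive setup) before it prints its report.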
00:18:27.120 [2024-12-16 01:39:57.529147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90977 ] 00:18:27.120 [2024-12-16 01:39:57.684348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:18:27.120 [2024-12-16 01:39:57.684419] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:27.120 [2024-12-16 01:39:57.684426] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:27.120 [2024-12-16 01:39:57.684436] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:27.120 [2024-12-16 01:39:57.684444] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:27.120 [2024-12-16 01:39:57.688764] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:18:27.120 [2024-12-16 01:39:57.688848] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15cbb00 0 00:18:27.120 [2024-12-16 01:39:57.696562] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:27.120 [2024-12-16 01:39:57.696595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:27.120 [2024-12-16 01:39:57.696617] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:27.120 [2024-12-16 01:39:57.696621] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:27.120 [2024-12-16 01:39:57.696650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.696657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.696661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.696675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:27.120 [2024-12-16 01:39:57.696705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.704561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.704584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.704605] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.704621] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:27.120 [2024-12-16 01:39:57.704629] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:18:27.120 [2024-12-16 01:39:57.704635] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:18:27.120 [2024-12-16 01:39:57.704651] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:27.120 [2024-12-16 01:39:57.704660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.704669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.120 [2024-12-16 01:39:57.704697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.704756] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.704764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.704768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.704778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:18:27.120 [2024-12-16 01:39:57.704786] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:18:27.120 [2024-12-16 01:39:57.704794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.704810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.120 [2024-12-16 01:39:57.704845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.704895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.704902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.704906] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.704931] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:18:27.120 [2024-12-16 01:39:57.704939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:27.120 [2024-12-16 01:39:57.704947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.704955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.704962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.120 [2024-12-16 01:39:57.704980] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.705026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.705033] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.705036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.705046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:27.120 [2024-12-16 01:39:57.705056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.705072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.120 [2024-12-16 01:39:57.705089] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.705134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.705140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.705144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.705153] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:27.120 [2024-12-16 01:39:57.705159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:27.120 [2024-12-16 01:39:57.705166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:27.120 [2024-12-16 01:39:57.705277] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:18:27.120 [2024-12-16 01:39:57.705283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:27.120 [2024-12-16 01:39:57.705293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.705309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.120 [2024-12-16 01:39:57.705328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.705372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.705379] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.705383] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:18:27.120 [2024-12-16 01:39:57.705387] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.705392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:27.120 [2024-12-16 01:39:57.705402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.120 [2024-12-16 01:39:57.705418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.120 [2024-12-16 01:39:57.705435] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.120 [2024-12-16 01:39:57.705475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.120 [2024-12-16 01:39:57.705482] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.120 [2024-12-16 01:39:57.705486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.120 [2024-12-16 01:39:57.705489] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.120 [2024-12-16 01:39:57.705494] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:27.120 [2024-12-16 01:39:57.705500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:27.120 [2024-12-16 01:39:57.705507] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:18:27.120 [2024-12-16 01:39:57.705517] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:27.120 [2024-12-16 01:39:57.705527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.121 [2024-12-16 01:39:57.705558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.121 [2024-12-16 01:39:57.705656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.121 [2024-12-16 01:39:57.705664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.121 [2024-12-16 01:39:57.705668] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705672] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15cbb00): datao=0, datal=4096, cccid=0 00:18:27.121 [2024-12-16 01:39:57.705678] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1611fc0) on tqpair(0x15cbb00): expected_datao=0, payload_size=4096 00:18:27.121 [2024-12-16 01:39:57.705683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705691] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705695] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705704] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.121 [2024-12-16 01:39:57.705710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.121 [2024-12-16 01:39:57.705713] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.121 [2024-12-16 01:39:57.705726] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:18:27.121 [2024-12-16 01:39:57.705732] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:18:27.121 [2024-12-16 01:39:57.705736] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:18:27.121 [2024-12-16 01:39:57.705742] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:18:27.121 [2024-12-16 01:39:57.705746] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:18:27.121 [2024-12-16 01:39:57.705752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:18:27.121 [2024-12-16 01:39:57.705765] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:27.121 [2024-12-16 01:39:57.705775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.121 [2024-12-16 01:39:57.705812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.121 [2024-12-16 01:39:57.705864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.121 [2024-12-16 01:39:57.705871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.121 [2024-12-16 01:39:57.705874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705878] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.121 [2024-12-16 01:39:57.705886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.121 
[2024-12-16 01:39:57.705908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.121 [2024-12-16 01:39:57.705927] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.121 [2024-12-16 01:39:57.705946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.121 [2024-12-16 01:39:57.705965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:27.121 [2024-12-16 01:39:57.705978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:27.121 [2024-12-16 01:39:57.705985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.705989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.705997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.121 [2024-12-16 01:39:57.706017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1611fc0, cid 0, qid 0 00:18:27.121 [2024-12-16 01:39:57.706024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612140, cid 1, qid 0 00:18:27.121 [2024-12-16 01:39:57.706029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16122c0, cid 2, qid 0 00:18:27.121 [2024-12-16 01:39:57.706034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.121 [2024-12-16 01:39:57.706039] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16125c0, cid 4, qid 0 00:18:27.121 [2024-12-16 01:39:57.706165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.121 [2024-12-16 01:39:57.706174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.121 [2024-12-16 01:39:57.706178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16125c0) on tqpair=0x15cbb00 00:18:27.121 [2024-12-16 
01:39:57.706188] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:18:27.121 [2024-12-16 01:39:57.706194] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:18:27.121 [2024-12-16 01:39:57.706205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.706218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.121 [2024-12-16 01:39:57.706238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16125c0, cid 4, qid 0 00:18:27.121 [2024-12-16 01:39:57.706300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.121 [2024-12-16 01:39:57.706307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.121 [2024-12-16 01:39:57.706311] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706315] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15cbb00): datao=0, datal=4096, cccid=4 00:18:27.121 [2024-12-16 01:39:57.706320] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16125c0) on tqpair(0x15cbb00): expected_datao=0, payload_size=4096 00:18:27.121 [2024-12-16 01:39:57.706325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706332] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706337] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.121 [2024-12-16 01:39:57.706353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.121 [2024-12-16 01:39:57.706357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16125c0) on tqpair=0x15cbb00 00:18:27.121 [2024-12-16 01:39:57.706389] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:18:27.121 [2024-12-16 01:39:57.706417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.706431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.121 [2024-12-16 01:39:57.706439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706461] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15cbb00) 00:18:27.121 [2024-12-16 01:39:57.706467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.121 [2024-12-16 01:39:57.706493] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16125c0, cid 4, qid 0 00:18:27.121 [2024-12-16 01:39:57.706501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612740, cid 5, qid 0 00:18:27.121 [2024-12-16 01:39:57.706624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.121 [2024-12-16 01:39:57.706633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.121 [2024-12-16 01:39:57.706636] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706640] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15cbb00): datao=0, datal=1024, cccid=4 00:18:27.121 [2024-12-16 01:39:57.706645] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16125c0) on tqpair(0x15cbb00): expected_datao=0, payload_size=1024 00:18:27.121 [2024-12-16 01:39:57.706650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706657] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706660] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.121 [2024-12-16 01:39:57.706672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.121 [2024-12-16 01:39:57.706676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.121 [2024-12-16 01:39:57.706680] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612740) on tqpair=0x15cbb00 00:18:27.121 [2024-12-16 01:39:57.706698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.121 [2024-12-16 01:39:57.706705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.122 [2024-12-16 01:39:57.706709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16125c0) on tqpair=0x15cbb00 00:18:27.122 [2024-12-16 01:39:57.706725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15cbb00) 00:18:27.122 [2024-12-16 01:39:57.706737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.122 [2024-12-16 01:39:57.706761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16125c0, cid 4, qid 0 00:18:27.122 [2024-12-16 01:39:57.706828] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.122 [2024-12-16 01:39:57.706836] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.122 [2024-12-16 01:39:57.706840] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706844] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15cbb00): datao=0, datal=3072, cccid=4 00:18:27.122 [2024-12-16 01:39:57.706849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16125c0) on tqpair(0x15cbb00): expected_datao=0, payload_size=3072 00:18:27.122 [2024-12-16 01:39:57.706853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706860] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:18:27.122 [2024-12-16 01:39:57.706864] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.122 [2024-12-16 01:39:57.706878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.122 [2024-12-16 01:39:57.706881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16125c0) on tqpair=0x15cbb00 00:18:27.122 [2024-12-16 01:39:57.706895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.706899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15cbb00) 00:18:27.122 [2024-12-16 01:39:57.706906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.122 [2024-12-16 01:39:57.706929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16125c0, cid 4, qid 0 00:18:27.122 [2024-12-16 01:39:57.706991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.122 [2024-12-16 01:39:57.706998] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.122 [2024-12-16 01:39:57.707002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.707006] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15cbb00): datao=0, datal=8, cccid=4 00:18:27.122 [2024-12-16 01:39:57.707010] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16125c0) on tqpair(0x15cbb00): expected_datao=0, payload_size=8 00:18:27.122 [2024-12-16 01:39:57.707015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.707022] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.707025] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.707040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.122 [2024-12-16 01:39:57.707047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.122 [2024-12-16 01:39:57.707050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.122 [2024-12-16 01:39:57.707054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16125c0) on tqpair=0x15cbb00 00:18:27.122 ===================================================== 00:18:27.122 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:27.122 ===================================================== 00:18:27.122 Controller Capabilities/Features 00:18:27.122 ================================ 00:18:27.122 Vendor ID: 0000 00:18:27.122 Subsystem Vendor ID: 0000 00:18:27.122 Serial Number: .................... 00:18:27.122 Model Number: ........................................ 
00:18:27.122 Firmware Version: 25.01 00:18:27.122 Recommended Arb Burst: 0 00:18:27.122 IEEE OUI Identifier: 00 00 00 00:18:27.122 Multi-path I/O 00:18:27.122 May have multiple subsystem ports: No 00:18:27.122 May have multiple controllers: No 00:18:27.122 Associated with SR-IOV VF: No 00:18:27.122 Max Data Transfer Size: 131072 00:18:27.122 Max Number of Namespaces: 0 00:18:27.122 Max Number of I/O Queues: 1024 00:18:27.122 NVMe Specification Version (VS): 1.3 00:18:27.122 NVMe Specification Version (Identify): 1.3 00:18:27.122 Maximum Queue Entries: 128 00:18:27.122 Contiguous Queues Required: Yes 00:18:27.122 Arbitration Mechanisms Supported 00:18:27.122 Weighted Round Robin: Not Supported 00:18:27.122 Vendor Specific: Not Supported 00:18:27.122 Reset Timeout: 15000 ms 00:18:27.122 Doorbell Stride: 4 bytes 00:18:27.122 NVM Subsystem Reset: Not Supported 00:18:27.122 Command Sets Supported 00:18:27.122 NVM Command Set: Supported 00:18:27.122 Boot Partition: Not Supported 00:18:27.122 Memory Page Size Minimum: 4096 bytes 00:18:27.122 Memory Page Size Maximum: 4096 bytes 00:18:27.122 Persistent Memory Region: Not Supported 00:18:27.122 Optional Asynchronous Events Supported 00:18:27.122 Namespace Attribute Notices: Not Supported 00:18:27.122 Firmware Activation Notices: Not Supported 00:18:27.122 ANA Change Notices: Not Supported 00:18:27.122 PLE Aggregate Log Change Notices: Not Supported 00:18:27.122 LBA Status Info Alert Notices: Not Supported 00:18:27.122 EGE Aggregate Log Change Notices: Not Supported 00:18:27.122 Normal NVM Subsystem Shutdown event: Not Supported 00:18:27.122 Zone Descriptor Change Notices: Not Supported 00:18:27.122 Discovery Log Change Notices: Supported 00:18:27.122 Controller Attributes 00:18:27.122 128-bit Host Identifier: Not Supported 00:18:27.122 Non-Operational Permissive Mode: Not Supported 00:18:27.122 NVM Sets: Not Supported 00:18:27.122 Read Recovery Levels: Not Supported 00:18:27.122 Endurance Groups: Not Supported 00:18:27.122 Predictable Latency Mode: Not Supported 00:18:27.122 Traffic Based Keep ALive: Not Supported 00:18:27.122 Namespace Granularity: Not Supported 00:18:27.122 SQ Associations: Not Supported 00:18:27.122 UUID List: Not Supported 00:18:27.122 Multi-Domain Subsystem: Not Supported 00:18:27.122 Fixed Capacity Management: Not Supported 00:18:27.122 Variable Capacity Management: Not Supported 00:18:27.122 Delete Endurance Group: Not Supported 00:18:27.122 Delete NVM Set: Not Supported 00:18:27.122 Extended LBA Formats Supported: Not Supported 00:18:27.122 Flexible Data Placement Supported: Not Supported 00:18:27.122 00:18:27.122 Controller Memory Buffer Support 00:18:27.122 ================================ 00:18:27.122 Supported: No 00:18:27.122 00:18:27.122 Persistent Memory Region Support 00:18:27.122 ================================ 00:18:27.122 Supported: No 00:18:27.122 00:18:27.122 Admin Command Set Attributes 00:18:27.122 ============================ 00:18:27.122 Security Send/Receive: Not Supported 00:18:27.122 Format NVM: Not Supported 00:18:27.122 Firmware Activate/Download: Not Supported 00:18:27.122 Namespace Management: Not Supported 00:18:27.122 Device Self-Test: Not Supported 00:18:27.122 Directives: Not Supported 00:18:27.122 NVMe-MI: Not Supported 00:18:27.122 Virtualization Management: Not Supported 00:18:27.122 Doorbell Buffer Config: Not Supported 00:18:27.122 Get LBA Status Capability: Not Supported 00:18:27.122 Command & Feature Lockdown Capability: Not Supported 00:18:27.122 Abort Command Limit: 1 00:18:27.122 Async 
Event Request Limit: 4 00:18:27.122 Number of Firmware Slots: N/A 00:18:27.122 Firmware Slot 1 Read-Only: N/A 00:18:27.122 Firmware Activation Without Reset: N/A 00:18:27.122 Multiple Update Detection Support: N/A 00:18:27.122 Firmware Update Granularity: No Information Provided 00:18:27.122 Per-Namespace SMART Log: No 00:18:27.122 Asymmetric Namespace Access Log Page: Not Supported 00:18:27.122 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:27.122 Command Effects Log Page: Not Supported 00:18:27.122 Get Log Page Extended Data: Supported 00:18:27.122 Telemetry Log Pages: Not Supported 00:18:27.122 Persistent Event Log Pages: Not Supported 00:18:27.122 Supported Log Pages Log Page: May Support 00:18:27.122 Commands Supported & Effects Log Page: Not Supported 00:18:27.122 Feature Identifiers & Effects Log Page:May Support 00:18:27.122 NVMe-MI Commands & Effects Log Page: May Support 00:18:27.122 Data Area 4 for Telemetry Log: Not Supported 00:18:27.122 Error Log Page Entries Supported: 128 00:18:27.122 Keep Alive: Not Supported 00:18:27.122 00:18:27.122 NVM Command Set Attributes 00:18:27.122 ========================== 00:18:27.122 Submission Queue Entry Size 00:18:27.122 Max: 1 00:18:27.122 Min: 1 00:18:27.122 Completion Queue Entry Size 00:18:27.122 Max: 1 00:18:27.122 Min: 1 00:18:27.122 Number of Namespaces: 0 00:18:27.122 Compare Command: Not Supported 00:18:27.122 Write Uncorrectable Command: Not Supported 00:18:27.122 Dataset Management Command: Not Supported 00:18:27.122 Write Zeroes Command: Not Supported 00:18:27.122 Set Features Save Field: Not Supported 00:18:27.122 Reservations: Not Supported 00:18:27.122 Timestamp: Not Supported 00:18:27.123 Copy: Not Supported 00:18:27.123 Volatile Write Cache: Not Present 00:18:27.123 Atomic Write Unit (Normal): 1 00:18:27.123 Atomic Write Unit (PFail): 1 00:18:27.123 Atomic Compare & Write Unit: 1 00:18:27.123 Fused Compare & Write: Supported 00:18:27.123 Scatter-Gather List 00:18:27.123 SGL Command Set: Supported 00:18:27.123 SGL Keyed: Supported 00:18:27.123 SGL Bit Bucket Descriptor: Not Supported 00:18:27.123 SGL Metadata Pointer: Not Supported 00:18:27.123 Oversized SGL: Not Supported 00:18:27.123 SGL Metadata Address: Not Supported 00:18:27.123 SGL Offset: Supported 00:18:27.123 Transport SGL Data Block: Not Supported 00:18:27.123 Replay Protected Memory Block: Not Supported 00:18:27.123 00:18:27.123 Firmware Slot Information 00:18:27.123 ========================= 00:18:27.123 Active slot: 0 00:18:27.123 00:18:27.123 00:18:27.123 Error Log 00:18:27.123 ========= 00:18:27.123 00:18:27.123 Active Namespaces 00:18:27.123 ================= 00:18:27.123 Discovery Log Page 00:18:27.123 ================== 00:18:27.123 Generation Counter: 2 00:18:27.123 Number of Records: 2 00:18:27.123 Record Format: 0 00:18:27.123 00:18:27.123 Discovery Log Entry 0 00:18:27.123 ---------------------- 00:18:27.123 Transport Type: 3 (TCP) 00:18:27.123 Address Family: 1 (IPv4) 00:18:27.123 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:27.123 Entry Flags: 00:18:27.123 Duplicate Returned Information: 1 00:18:27.123 Explicit Persistent Connection Support for Discovery: 1 00:18:27.123 Transport Requirements: 00:18:27.123 Secure Channel: Not Required 00:18:27.123 Port ID: 0 (0x0000) 00:18:27.123 Controller ID: 65535 (0xffff) 00:18:27.123 Admin Max SQ Size: 128 00:18:27.123 Transport Service Identifier: 4420 00:18:27.123 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:27.123 Transport Address: 10.0.0.3 00:18:27.123 
Discovery Log Entry 1 00:18:27.123 ---------------------- 00:18:27.123 Transport Type: 3 (TCP) 00:18:27.123 Address Family: 1 (IPv4) 00:18:27.123 Subsystem Type: 2 (NVM Subsystem) 00:18:27.123 Entry Flags: 00:18:27.123 Duplicate Returned Information: 0 00:18:27.123 Explicit Persistent Connection Support for Discovery: 0 00:18:27.123 Transport Requirements: 00:18:27.123 Secure Channel: Not Required 00:18:27.123 Port ID: 0 (0x0000) 00:18:27.123 Controller ID: 65535 (0xffff) 00:18:27.123 Admin Max SQ Size: 128 00:18:27.123 Transport Service Identifier: 4420 00:18:27.123 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:18:27.123 Transport Address: 10.0.0.3 [2024-12-16 01:39:57.707142] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:18:27.123 [2024-12-16 01:39:57.707156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1611fc0) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.123 [2024-12-16 01:39:57.707169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612140) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.123 [2024-12-16 01:39:57.707179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16122c0) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.123 [2024-12-16 01:39:57.707188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.123 [2024-12-16 01:39:57.707202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 01:39:57.707218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707240] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.123 [2024-12-16 01:39:57.707299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 
01:39:57.707326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.123 [2024-12-16 01:39:57.707422] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707426] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707431] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:18:27.123 [2024-12-16 01:39:57.707436] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:18:27.123 [2024-12-16 01:39:57.707446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 01:39:57.707462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707538] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.123 [2024-12-16 01:39:57.707550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 01:39:57.707582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707643] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.123 [2024-12-16 01:39:57.707654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707676] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 01:39:57.707684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.123 [2024-12-16 01:39:57.707754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 01:39:57.707784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.123 [2024-12-16 01:39:57.707857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707861] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.123 [2024-12-16 01:39:57.707871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707876] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.123 [2024-12-16 01:39:57.707880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.123 [2024-12-16 01:39:57.707887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.123 [2024-12-16 01:39:57.707903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.123 [2024-12-16 01:39:57.707949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.123 [2024-12-16 01:39:57.707956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.707960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.707964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.707974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.707978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.707982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.707989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.708006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 [2024-12-16 01:39:57.708046] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.708053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.708056] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.708070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.708086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.708102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 [2024-12-16 01:39:57.708148] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.708155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.708158] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.708172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.708188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.708204] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 [2024-12-16 01:39:57.708245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.708252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.708255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708259] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.708269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.708285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.708301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 
[2024-12-16 01:39:57.708342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.708349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.708352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.708367] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708375] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.708383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.708399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 [2024-12-16 01:39:57.708442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.708449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.708452] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.708466] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.708475] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.708482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.708498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 [2024-12-16 01:39:57.712561] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.712582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.124 [2024-12-16 01:39:57.712603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.712608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.712622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.712627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.712631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15cbb00) 00:18:27.124 [2024-12-16 01:39:57.712640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.124 [2024-12-16 01:39:57.712664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1612440, cid 3, qid 0 00:18:27.124 [2024-12-16 01:39:57.712715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.124 [2024-12-16 01:39:57.712722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:18:27.124 [2024-12-16 01:39:57.712726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.124 [2024-12-16 01:39:57.712730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1612440) on tqpair=0x15cbb00 00:18:27.124 [2024-12-16 01:39:57.712739] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:18:27.124 00:18:27.124 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:18:27.124 [2024-12-16 01:39:57.754721] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:27.124 [2024-12-16 01:39:57.754770] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90980 ] 00:18:27.388 [2024-12-16 01:39:57.909560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:18:27.388 [2024-12-16 01:39:57.909629] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:18:27.388 [2024-12-16 01:39:57.909636] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:18:27.388 [2024-12-16 01:39:57.909646] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:18:27.388 [2024-12-16 01:39:57.909654] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:18:27.388 [2024-12-16 01:39:57.909944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:18:27.388 [2024-12-16 01:39:57.910002] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b8ab00 0 00:18:27.388 [2024-12-16 01:39:57.914630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:18:27.388 [2024-12-16 01:39:57.914655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:18:27.388 [2024-12-16 01:39:57.914677] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:18:27.388 [2024-12-16 01:39:57.914681] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:18:27.388 [2024-12-16 01:39:57.914709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.388 [2024-12-16 01:39:57.914716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.388 [2024-12-16 01:39:57.914720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.388 [2024-12-16 01:39:57.914732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:18:27.388 [2024-12-16 01:39:57.914763] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.388 [2024-12-16 01:39:57.922615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.388 [2024-12-16 01:39:57.922636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.388 [2024-12-16 01:39:57.922657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.388 [2024-12-16 01:39:57.922662] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.388 [2024-12-16 01:39:57.922672] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:27.388 [2024-12-16 01:39:57.922680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:18:27.388 [2024-12-16 01:39:57.922686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:18:27.388 [2024-12-16 01:39:57.922700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.388 [2024-12-16 01:39:57.922706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.388 [2024-12-16 01:39:57.922710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.388 [2024-12-16 01:39:57.922718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.388 [2024-12-16 01:39:57.922745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.388 [2024-12-16 01:39:57.922801] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.388 [2024-12-16 01:39:57.922808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.922812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.922816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.922821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:18:27.389 [2024-12-16 01:39:57.922828] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:18:27.389 [2024-12-16 01:39:57.922836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.922840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.922844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.922851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.389 [2024-12-16 01:39:57.922884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.923215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.923231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.923236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.923246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:18:27.389 [2024-12-16 01:39:57.923255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:18:27.389 [2024-12-16 01:39:57.923263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.923279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.389 [2024-12-16 01:39:57.923299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.923347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.923353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.923357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.923367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:27.389 [2024-12-16 01:39:57.923377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.923393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.389 [2024-12-16 01:39:57.923410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.923763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.923779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.923784] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.923799] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:18:27.389 [2024-12-16 01:39:57.923805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:18:27.389 [2024-12-16 01:39:57.923814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:27.389 [2024-12-16 01:39:57.923932] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:18:27.389 [2024-12-16 01:39:57.923938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:27.389 [2024-12-16 01:39:57.923947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.923956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.923963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.389 [2024-12-16 01:39:57.923986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.924309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.924325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.924329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.924339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:27.389 [2024-12-16 01:39:57.924351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.924367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.389 [2024-12-16 01:39:57.924386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.924466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.924473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.924477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.924486] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:27.389 [2024-12-16 01:39:57.924491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:18:27.389 [2024-12-16 01:39:57.924499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:18:27.389 [2024-12-16 01:39:57.924509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:18:27.389 [2024-12-16 01:39:57.924519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.924530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.389 [2024-12-16 01:39:57.924548] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.924951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.389 [2024-12-16 
01:39:57.924967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.389 [2024-12-16 01:39:57.924972] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924976] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=4096, cccid=0 00:18:27.389 [2024-12-16 01:39:57.924982] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd0fc0) on tqpair(0x1b8ab00): expected_datao=0, payload_size=4096 00:18:27.389 [2024-12-16 01:39:57.924986] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924994] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.924999] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.925014] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.925018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.389 [2024-12-16 01:39:57.925030] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:18:27.389 [2024-12-16 01:39:57.925036] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:18:27.389 [2024-12-16 01:39:57.925055] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:18:27.389 [2024-12-16 01:39:57.925060] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:18:27.389 [2024-12-16 01:39:57.925065] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:18:27.389 [2024-12-16 01:39:57.925070] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:18:27.389 [2024-12-16 01:39:57.925100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:18:27.389 [2024-12-16 01:39:57.925111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.925128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.389 [2024-12-16 01:39:57.925150] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.389 [2024-12-16 01:39:57.925470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.389 [2024-12-16 01:39:57.925485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.389 [2024-12-16 01:39:57.925489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 
00:18:27.389 [2024-12-16 01:39:57.925502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925506] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.389 [2024-12-16 01:39:57.925516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b8ab00) 00:18:27.389 [2024-12-16 01:39:57.925523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.389 [2024-12-16 01:39:57.925546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.925562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.390 [2024-12-16 01:39:57.925568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.925582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.390 [2024-12-16 01:39:57.925588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.925601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.390 [2024-12-16 01:39:57.925607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.925621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.925629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.925633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.925640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.390 [2024-12-16 01:39:57.925664] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd0fc0, cid 0, qid 0 00:18:27.390 [2024-12-16 01:39:57.925671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1140, cid 1, qid 0 00:18:27.390 [2024-12-16 01:39:57.925676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd12c0, cid 2, qid 0 00:18:27.390 [2024-12-16 01:39:57.925681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.390 [2024-12-16 01:39:57.925686] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1bd15c0, cid 4, qid 0 00:18:27.390 [2024-12-16 01:39:57.926281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.390 [2024-12-16 01:39:57.926298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.390 [2024-12-16 01:39:57.926303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.926307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.390 [2024-12-16 01:39:57.926313] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:18:27.390 [2024-12-16 01:39:57.926320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.926335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.926359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.926382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.926387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.926391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.926398] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:27.390 [2024-12-16 01:39:57.926434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd15c0, cid 4, qid 0 00:18:27.390 [2024-12-16 01:39:57.930604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.390 [2024-12-16 01:39:57.930623] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.390 [2024-12-16 01:39:57.930628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.930649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.390 [2024-12-16 01:39:57.930710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.930722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.930731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.930736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.930744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.390 [2024-12-16 01:39:57.930768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd15c0, cid 4, qid 0 00:18:27.390 [2024-12-16 01:39:57.930843] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.390 [2024-12-16 01:39:57.930866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.390 [2024-12-16 
01:39:57.930870] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.930874] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=4096, cccid=4 00:18:27.390 [2024-12-16 01:39:57.930879] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd15c0) on tqpair(0x1b8ab00): expected_datao=0, payload_size=4096 00:18:27.390 [2024-12-16 01:39:57.930883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.930891] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.930895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.390 [2024-12-16 01:39:57.931214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.390 [2024-12-16 01:39:57.931218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.390 [2024-12-16 01:39:57.931238] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:18:27.390 [2024-12-16 01:39:57.931253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.931264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.931273] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.931285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.390 [2024-12-16 01:39:57.931307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd15c0, cid 4, qid 0 00:18:27.390 [2024-12-16 01:39:57.931719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.390 [2024-12-16 01:39:57.931736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.390 [2024-12-16 01:39:57.931741] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931745] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=4096, cccid=4 00:18:27.390 [2024-12-16 01:39:57.931751] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd15c0) on tqpair(0x1b8ab00): expected_datao=0, payload_size=4096 00:18:27.390 [2024-12-16 01:39:57.931756] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931763] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931768] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.390 [2024-12-16 01:39:57.931797] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.390 [2024-12-16 01:39:57.931801] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:18:27.390 [2024-12-16 01:39:57.931806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.390 [2024-12-16 01:39:57.931822] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.931834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.931843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.931847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.390 [2024-12-16 01:39:57.931855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.390 [2024-12-16 01:39:57.931894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd15c0, cid 4, qid 0 00:18:27.390 [2024-12-16 01:39:57.932154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.390 [2024-12-16 01:39:57.932169] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.390 [2024-12-16 01:39:57.932174] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.932178] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=4096, cccid=4 00:18:27.390 [2024-12-16 01:39:57.932183] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd15c0) on tqpair(0x1b8ab00): expected_datao=0, payload_size=4096 00:18:27.390 [2024-12-16 01:39:57.932187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.932195] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.932200] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.932208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.390 [2024-12-16 01:39:57.932214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.390 [2024-12-16 01:39:57.932218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.390 [2024-12-16 01:39:57.932222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.390 [2024-12-16 01:39:57.932230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.932240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.932253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:18:27.390 [2024-12-16 01:39:57.932261] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:27.391 [2024-12-16 01:39:57.932266] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:27.391 [2024-12-16 01:39:57.932272] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:18:27.391 [2024-12-16 01:39:57.932278] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:18:27.391 [2024-12-16 01:39:57.932283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:18:27.391 [2024-12-16 01:39:57.932288] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:18:27.391 [2024-12-16 01:39:57.932303] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.932308] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.932316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.932324] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.932328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.932331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.932338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:27.391 [2024-12-16 01:39:57.932364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd15c0, cid 4, qid 0 00:18:27.391 [2024-12-16 01:39:57.932372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1740, cid 5, qid 0 00:18:27.391 [2024-12-16 01:39:57.932841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.391 [2024-12-16 01:39:57.932850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.391 [2024-12-16 01:39:57.932854] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.932859] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.391 [2024-12-16 01:39:57.932866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.391 [2024-12-16 01:39:57.932873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.391 [2024-12-16 01:39:57.932891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.932895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1740) on tqpair=0x1b8ab00 00:18:27.391 [2024-12-16 01:39:57.932917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.932922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.932929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.932964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1740, cid 5, qid 0 00:18:27.391 [2024-12-16 01:39:57.933011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.391 [2024-12-16 01:39:57.933017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:18:27.391 [2024-12-16 01:39:57.933021] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933025] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1740) on tqpair=0x1b8ab00 00:18:27.391 [2024-12-16 01:39:57.933034] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.933045] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.933061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1740, cid 5, qid 0 00:18:27.391 [2024-12-16 01:39:57.933102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.391 [2024-12-16 01:39:57.933108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.391 [2024-12-16 01:39:57.933112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1740) on tqpair=0x1b8ab00 00:18:27.391 [2024-12-16 01:39:57.933125] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933129] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.933136] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.933151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1740, cid 5, qid 0 00:18:27.391 [2024-12-16 01:39:57.933196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.391 [2024-12-16 01:39:57.933202] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.391 [2024-12-16 01:39:57.933205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933209] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1740) on tqpair=0x1b8ab00 00:18:27.391 [2024-12-16 01:39:57.933226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.933238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.933245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.933256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.933263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.933273] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.933280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b8ab00) 00:18:27.391 [2024-12-16 01:39:57.933289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.391 [2024-12-16 01:39:57.933308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1740, cid 5, qid 0 00:18:27.391 [2024-12-16 01:39:57.933315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd15c0, cid 4, qid 0 00:18:27.391 [2024-12-16 01:39:57.933319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd18c0, cid 6, qid 0 00:18:27.391 [2024-12-16 01:39:57.933324] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1a40, cid 7, qid 0 00:18:27.391 [2024-12-16 01:39:57.933451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.391 [2024-12-16 01:39:57.933457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.391 [2024-12-16 01:39:57.933461] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933464] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=8192, cccid=5 00:18:27.391 [2024-12-16 01:39:57.933469] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd1740) on tqpair(0x1b8ab00): expected_datao=0, payload_size=8192 00:18:27.391 [2024-12-16 01:39:57.933473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933489] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933493] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.391 [2024-12-16 01:39:57.933504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.391 [2024-12-16 01:39:57.933508] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933511] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=512, cccid=4 00:18:27.391 [2024-12-16 01:39:57.933516] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd15c0) on tqpair(0x1b8ab00): expected_datao=0, payload_size=512 00:18:27.391 [2024-12-16 01:39:57.933520] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933526] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933529] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.391 [2024-12-16 01:39:57.933540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.391 [2024-12-16 01:39:57.933543] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933547] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=512, 
cccid=6 00:18:27.391 [2024-12-16 01:39:57.933551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd18c0) on tqpair(0x1b8ab00): expected_datao=0, payload_size=512 00:18:27.391 [2024-12-16 01:39:57.933555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933562] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933566] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:18:27.391 [2024-12-16 01:39:57.933606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:18:27.391 [2024-12-16 01:39:57.933611] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933614] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b8ab00): datao=0, datal=4096, cccid=7 00:18:27.391 [2024-12-16 01:39:57.933619] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1bd1a40) on tqpair(0x1b8ab00): expected_datao=0, payload_size=4096 00:18:27.391 [2024-12-16 01:39:57.933623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933629] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933633] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.391 [2024-12-16 01:39:57.933648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.391 [2024-12-16 01:39:57.933651] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.391 [2024-12-16 01:39:57.933655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1740) on tqpair=0x1b8ab00 00:18:27.391 ===================================================== 00:18:27.391 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:27.391 ===================================================== 00:18:27.391 Controller Capabilities/Features 00:18:27.391 ================================ 00:18:27.392 Vendor ID: 8086 00:18:27.392 Subsystem Vendor ID: 8086 00:18:27.392 Serial Number: SPDK00000000000001 00:18:27.392 Model Number: SPDK bdev Controller 00:18:27.392 Firmware Version: 25.01 00:18:27.392 Recommended Arb Burst: 6 00:18:27.392 IEEE OUI Identifier: e4 d2 5c 00:18:27.392 Multi-path I/O 00:18:27.392 May have multiple subsystem ports: Yes 00:18:27.392 May have multiple controllers: Yes 00:18:27.392 Associated with SR-IOV VF: No 00:18:27.392 Max Data Transfer Size: 131072 00:18:27.392 Max Number of Namespaces: 32 00:18:27.392 Max Number of I/O Queues: 127 00:18:27.392 NVMe Specification Version (VS): 1.3 00:18:27.392 NVMe Specification Version (Identify): 1.3 00:18:27.392 Maximum Queue Entries: 128 00:18:27.392 Contiguous Queues Required: Yes 00:18:27.392 Arbitration Mechanisms Supported 00:18:27.392 Weighted Round Robin: Not Supported 00:18:27.392 Vendor Specific: Not Supported 00:18:27.392 Reset Timeout: 15000 ms 00:18:27.392 Doorbell Stride: 4 bytes 00:18:27.392 NVM Subsystem Reset: Not Supported 00:18:27.392 Command Sets Supported 00:18:27.392 NVM Command Set: Supported 00:18:27.392 Boot Partition: Not Supported 00:18:27.392 Memory Page Size Minimum: 4096 bytes 00:18:27.392 Memory Page Size Maximum: 4096 bytes 00:18:27.392 Persistent Memory Region: Not Supported 00:18:27.392 
Optional Asynchronous Events Supported 00:18:27.392 Namespace Attribute Notices: Supported 00:18:27.392 Firmware Activation Notices: Not Supported 00:18:27.392 ANA Change Notices: Not Supported 00:18:27.392 PLE Aggregate Log Change Notices: Not Supported 00:18:27.392 LBA Status Info Alert Notices: Not Supported 00:18:27.392 EGE Aggregate Log Change Notices: Not Supported 00:18:27.392 Normal NVM Subsystem Shutdown event: Not Supported 00:18:27.392 Zone Descriptor Change Notices: Not Supported 00:18:27.392 Discovery Log Change Notices: Not Supported 00:18:27.392 Controller Attributes 00:18:27.392 128-bit Host Identifier: Supported 00:18:27.392 Non-Operational Permissive Mode: Not Supported 00:18:27.392 NVM Sets: Not Supported 00:18:27.392 Read Recovery Levels: Not Supported 00:18:27.392 Endurance Groups: Not Supported 00:18:27.392 Predictable Latency Mode: Not Supported 00:18:27.392 Traffic Based Keep ALive: Not Supported 00:18:27.392 Namespace Granularity: Not Supported 00:18:27.392 SQ Associations: Not Supported 00:18:27.392 UUID List: Not Supported 00:18:27.392 Multi-Domain Subsystem: Not Supported 00:18:27.392 Fixed Capacity Management: Not Supported 00:18:27.392 Variable Capacity Management: Not Supported 00:18:27.392 Delete Endurance Group: Not Supported 00:18:27.392 Delete NVM Set: Not Supported 00:18:27.392 Extended LBA Formats Supported: Not Supported 00:18:27.392 Flexible Data Placement Supported: Not Supported 00:18:27.392 00:18:27.392 Controller Memory Buffer Support 00:18:27.392 ================================ 00:18:27.392 Supported: No 00:18:27.392 00:18:27.392 Persistent Memory Region Support 00:18:27.392 ================================ 00:18:27.392 Supported: No 00:18:27.392 00:18:27.392 Admin Command Set Attributes 00:18:27.392 ============================ 00:18:27.392 Security Send/Receive: Not Supported 00:18:27.392 Format NVM: Not Supported 00:18:27.392 Firmware Activate/Download: Not Supported 00:18:27.392 Namespace Management: Not Supported 00:18:27.392 Device Self-Test: Not Supported 00:18:27.392 Directives: Not Supported 00:18:27.392 NVMe-MI: Not Supported 00:18:27.392 Virtualization Management: Not Supported 00:18:27.392 Doorbell Buffer Config: Not Supported 00:18:27.392 Get LBA Status Capability: Not Supported 00:18:27.392 Command & Feature Lockdown Capability: Not Supported 00:18:27.392 Abort Command Limit: 4 00:18:27.392 Async Event Request Limit: 4 00:18:27.392 Number of Firmware Slots: N/A 00:18:27.392 Firmware Slot 1 Read-Only: N/A 00:18:27.392 Firmware Activation Without Reset: N/A 00:18:27.392 Multiple Update Detection Support: N/A 00:18:27.392 Firmware Update Granularity: No Information Provided 00:18:27.392 Per-Namespace SMART Log: No 00:18:27.392 Asymmetric Namespace Access Log Page: Not Supported 00:18:27.392 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:18:27.392 Command Effects Log Page: Supported 00:18:27.392 Get Log Page Extended Data: Supported 00:18:27.392 Telemetry Log Pages: Not Supported 00:18:27.392 Persistent Event Log Pages: Not Supported 00:18:27.392 Supported Log Pages Log Page: May Support 00:18:27.392 Commands Supported & Effects Log Page: Not Supported 00:18:27.392 Feature Identifiers & Effects Log Page:May Support 00:18:27.392 NVMe-MI Commands & Effects Log Page: May Support 00:18:27.392 Data Area 4 for Telemetry Log: Not Supported 00:18:27.392 Error Log Page Entries Supported: 128 00:18:27.392 Keep Alive: Supported 00:18:27.392 Keep Alive Granularity: 10000 ms 00:18:27.392 00:18:27.392 NVM Command Set Attributes 00:18:27.392 
========================== 00:18:27.392 Submission Queue Entry Size 00:18:27.392 Max: 64 00:18:27.392 Min: 64 00:18:27.392 Completion Queue Entry Size 00:18:27.392 Max: 16 00:18:27.392 Min: 16 00:18:27.392 Number of Namespaces: 32 00:18:27.392 Compare Command: Supported 00:18:27.392 Write Uncorrectable Command: Not Supported 00:18:27.392 Dataset Management Command: Supported 00:18:27.392 Write Zeroes Command: Supported 00:18:27.392 Set Features Save Field: Not Supported 00:18:27.392 Reservations: Supported 00:18:27.392 Timestamp: Not Supported 00:18:27.392 Copy: Supported 00:18:27.392 Volatile Write Cache: Present 00:18:27.392 Atomic Write Unit (Normal): 1 00:18:27.392 Atomic Write Unit (PFail): 1 00:18:27.392 Atomic Compare & Write Unit: 1 00:18:27.392 Fused Compare & Write: Supported 00:18:27.392 Scatter-Gather List 00:18:27.392 SGL Command Set: Supported 00:18:27.392 SGL Keyed: Supported 00:18:27.392 SGL Bit Bucket Descriptor: Not Supported 00:18:27.392 SGL Metadata Pointer: Not Supported 00:18:27.392 Oversized SGL: Not Supported 00:18:27.392 SGL Metadata Address: Not Supported 00:18:27.392 SGL Offset: Supported 00:18:27.392 Transport SGL Data Block: Not Supported 00:18:27.392 Replay Protected Memory Block: Not Supported 00:18:27.392 00:18:27.392 Firmware Slot Information 00:18:27.392 ========================= 00:18:27.392 Active slot: 1 00:18:27.392 Slot 1 Firmware Revision: 25.01 00:18:27.392 00:18:27.392 00:18:27.392 Commands Supported and Effects 00:18:27.392 ============================== 00:18:27.392 Admin Commands 00:18:27.392 -------------- 00:18:27.392 Get Log Page (02h): Supported 00:18:27.392 Identify (06h): Supported 00:18:27.392 Abort (08h): Supported 00:18:27.392 Set Features (09h): Supported 00:18:27.392 Get Features (0Ah): Supported 00:18:27.392 Asynchronous Event Request (0Ch): Supported 00:18:27.392 Keep Alive (18h): Supported 00:18:27.392 I/O Commands 00:18:27.392 ------------ 00:18:27.392 Flush (00h): Supported LBA-Change 00:18:27.392 Write (01h): Supported LBA-Change 00:18:27.392 Read (02h): Supported 00:18:27.392 Compare (05h): Supported 00:18:27.392 Write Zeroes (08h): Supported LBA-Change 00:18:27.392 Dataset Management (09h): Supported LBA-Change 00:18:27.392 Copy (19h): Supported LBA-Change 00:18:27.392 00:18:27.392 Error Log 00:18:27.392 ========= 00:18:27.392 00:18:27.392 Arbitration 00:18:27.392 =========== 00:18:27.392 Arbitration Burst: 1 00:18:27.392 00:18:27.392 Power Management 00:18:27.392 ================ 00:18:27.392 Number of Power States: 1 00:18:27.392 Current Power State: Power State #0 00:18:27.392 Power State #0: 00:18:27.392 Max Power: 0.00 W 00:18:27.392 Non-Operational State: Operational 00:18:27.392 Entry Latency: Not Reported 00:18:27.392 Exit Latency: Not Reported 00:18:27.392 Relative Read Throughput: 0 00:18:27.392 Relative Read Latency: 0 00:18:27.392 Relative Write Throughput: 0 00:18:27.392 Relative Write Latency: 0 00:18:27.392 Idle Power: Not Reported 00:18:27.392 Active Power: Not Reported 00:18:27.392 Non-Operational Permissive Mode: Not Supported 00:18:27.392 00:18:27.392 Health Information 00:18:27.392 ================== 00:18:27.392 Critical Warnings: 00:18:27.392 Available Spare Space: OK 00:18:27.392 Temperature: OK 00:18:27.392 Device Reliability: OK 00:18:27.392 Read Only: No 00:18:27.392 Volatile Memory Backup: OK 00:18:27.392 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:27.392 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:27.392 Available Spare: 0% 00:18:27.392 Available Spare Threshold: 0% 00:18:27.392 Life 
Percentage Used:[2024-12-16 01:39:57.933670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.392 [2024-12-16 01:39:57.933676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.392 [2024-12-16 01:39:57.933680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.392 [2024-12-16 01:39:57.933684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd15c0) on tqpair=0x1b8ab00 00:18:27.392 [2024-12-16 01:39:57.933694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.933700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.933704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.933707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd18c0) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.933715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.933720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.933724] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.933727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1a40) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.933822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.933828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.933835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.933859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1a40, cid 7, qid 0 00:18:27.393 [2024-12-16 01:39:57.934392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.934406] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.934410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.934429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1a40) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.934483] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:18:27.393 [2024-12-16 01:39:57.934499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd0fc0) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.934506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.393 [2024-12-16 01:39:57.934512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1140) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.934517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.393 [2024-12-16 01:39:57.934522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd12c0) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.938620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.393 [2024-12-16 01:39:57.938630] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.938634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:27.393 [2024-12-16 01:39:57.938645] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.938661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.938689] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.938755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.938762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.938765] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938769] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.938777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.938791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.938811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.938875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.938881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.938885] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.938893] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:18:27.393 [2024-12-16 01:39:57.938898] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:18:27.393 [2024-12-16 01:39:57.938907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.938921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.938937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.938981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.938987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.938991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.938994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.939004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.939020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.939036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.939076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.939082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.939085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.939098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939102] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.939112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.939127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.939172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.939178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.939181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.939194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939202] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.939208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.939223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.939268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.939274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.939277] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.393 [2024-12-16 01:39:57.939290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.393 [2024-12-16 01:39:57.939304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.393 [2024-12-16 01:39:57.939319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.393 [2024-12-16 01:39:57.939358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.393 [2024-12-16 01:39:57.939364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.393 [2024-12-16 01:39:57.939367] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.393 [2024-12-16 01:39:57.939371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.939380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.939411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.939453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.939459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.939463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.939476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939480] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.939505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.939559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.939567] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.939570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939574] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 
00:18:27.394 [2024-12-16 01:39:57.939584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.939616] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.939661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.939667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.939670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939674] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.939684] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.939713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.939760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.939766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.939769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.939782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.939813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.939855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.939861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.939864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.939877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939881] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:18:27.394 [2024-12-16 01:39:57.939885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.939906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.939951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.939957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.939960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.939973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.939980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.939987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.940002] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.940041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.940047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.940050] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.940063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940070] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.940077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.940092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.940132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.940138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.940141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.940154] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.940169] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.940184] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.940229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.940235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.940238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.940251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.940265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.940280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.940324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.940330] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.940334] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.940347] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940351] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940354] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.940361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.940375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.394 [2024-12-16 01:39:57.940422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.394 [2024-12-16 01:39:57.940428] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.394 [2024-12-16 01:39:57.940431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.394 [2024-12-16 01:39:57.940444] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.394 [2024-12-16 01:39:57.940452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.394 [2024-12-16 01:39:57.940458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.394 [2024-12-16 01:39:57.940473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.940512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.940518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.940522] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.940547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.940562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.940580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.940629] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.940635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.940639] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.940652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940659] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.940666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.940681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.940722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.940728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.940732] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940735] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.940745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.940758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.940773] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.940815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.940821] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.940824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.940837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.940851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.940866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.940908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.940914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.940917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.940930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.940938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.940945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.940961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.941000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.941006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.941010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.941022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.941036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.941051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.941090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.941096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.941100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 
01:39:57.941103] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.941113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.941127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.941141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.941183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.941189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.941192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.941205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.941220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.941234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.941273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.941279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.941283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.941296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.941311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.941327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.941368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.941374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.941378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.941391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.941405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.941420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.395 [2024-12-16 01:39:57.941457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.395 [2024-12-16 01:39:57.941463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.395 [2024-12-16 01:39:57.941466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.395 [2024-12-16 01:39:57.941479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.395 [2024-12-16 01:39:57.941486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.395 [2024-12-16 01:39:57.941493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.395 [2024-12-16 01:39:57.941508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.941581] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.941589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.941593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.941607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941612] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.941622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.941639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.941680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.941686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.941690] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.941703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941711] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.941718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.941735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.941775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.941781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.941785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941789] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.941798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.941813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.941828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.941868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.941874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.941878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.941891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.941906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.941935] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.941977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.941983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.941986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.941990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.941999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.942013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.942028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.942103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.942111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.942115] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942119] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.942130] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.942147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.942166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.942207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.942214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.942217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.942232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.942247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.942264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.942313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.942319] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.942323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.942337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.942352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.942383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 
01:39:57.942452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.942458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.942461] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.942474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942481] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.942488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.942503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.942542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.942548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.942551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.942564] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942568] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.942572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b8ab00) 00:18:27.396 [2024-12-16 01:39:57.942580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:27.396 [2024-12-16 01:39:57.946659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1bd1440, cid 3, qid 0 00:18:27.396 [2024-12-16 01:39:57.946714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:18:27.396 [2024-12-16 01:39:57.946721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:18:27.396 [2024-12-16 01:39:57.946725] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:18:27.396 [2024-12-16 01:39:57.946729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1bd1440) on tqpair=0x1b8ab00 00:18:27.396 [2024-12-16 01:39:57.946738] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:18:27.396 0% 00:18:27.396 Data Units Read: 0 00:18:27.396 Data Units Written: 0 00:18:27.396 Host Read Commands: 0 00:18:27.396 Host Write Commands: 0 00:18:27.396 Controller Busy Time: 0 minutes 00:18:27.396 Power Cycles: 0 00:18:27.396 Power On Hours: 0 hours 00:18:27.396 Unsafe Shutdowns: 0 00:18:27.396 Unrecoverable Media Errors: 0 00:18:27.396 Lifetime Error Log Entries: 0 00:18:27.396 Warning Temperature Time: 0 minutes 00:18:27.396 Critical Temperature Time: 0 minutes 00:18:27.396 00:18:27.396 Number of Queues 00:18:27.396 ================ 00:18:27.396 Number of I/O Submission Queues: 127 00:18:27.396 Number of I/O Completion Queues: 127 00:18:27.396 00:18:27.396 Active Namespaces 00:18:27.396 
================= 00:18:27.396 Namespace ID:1 00:18:27.396 Error Recovery Timeout: Unlimited 00:18:27.397 Command Set Identifier: NVM (00h) 00:18:27.397 Deallocate: Supported 00:18:27.397 Deallocated/Unwritten Error: Not Supported 00:18:27.397 Deallocated Read Value: Unknown 00:18:27.397 Deallocate in Write Zeroes: Not Supported 00:18:27.397 Deallocated Guard Field: 0xFFFF 00:18:27.397 Flush: Supported 00:18:27.397 Reservation: Supported 00:18:27.397 Namespace Sharing Capabilities: Multiple Controllers 00:18:27.397 Size (in LBAs): 131072 (0GiB) 00:18:27.397 Capacity (in LBAs): 131072 (0GiB) 00:18:27.397 Utilization (in LBAs): 131072 (0GiB) 00:18:27.397 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:27.397 EUI64: ABCDEF0123456789 00:18:27.397 UUID: f8efe774-030d-4b9c-b58d-85ecd952178d 00:18:27.397 Thin Provisioning: Not Supported 00:18:27.397 Per-NS Atomic Units: Yes 00:18:27.397 Atomic Boundary Size (Normal): 0 00:18:27.397 Atomic Boundary Size (PFail): 0 00:18:27.397 Atomic Boundary Offset: 0 00:18:27.397 Maximum Single Source Range Length: 65535 00:18:27.397 Maximum Copy Length: 65535 00:18:27.397 Maximum Source Range Count: 1 00:18:27.397 NGUID/EUI64 Never Reused: No 00:18:27.397 Namespace Write Protected: No 00:18:27.397 Number of LBA Formats: 1 00:18:27.397 Current LBA Format: LBA Format #00 00:18:27.397 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:27.397 00:18:27.397 01:39:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:27.397 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:27.397 rmmod nvme_tcp 00:18:27.677 rmmod nvme_fabrics 00:18:27.677 rmmod nvme_keyring 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 90951 ']' 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 90951 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 90951 ']' 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 90951 00:18:27.677 01:39:58 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90951 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90951' 00:18:27.677 killing process with pid 90951 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 90951 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 90951 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:27.677 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:27.678 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:27.678 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:27.678 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:27.678 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:27.977 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:27.977 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:27.977 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:18:27.978 00:18:27.978 real 0m2.127s 00:18:27.978 user 0m4.213s 00:18:27.978 sys 0m0.720s 00:18:27.978 ************************************ 00:18:27.978 END TEST nvmf_identify 00:18:27.978 ************************************ 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.978 ************************************ 00:18:27.978 START TEST nvmf_perf 00:18:27.978 ************************************ 00:18:27.978 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:18:28.241 * Looking for test storage... 00:18:28.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:28.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.241 --rc genhtml_branch_coverage=1 00:18:28.241 --rc genhtml_function_coverage=1 00:18:28.241 --rc genhtml_legend=1 00:18:28.241 --rc geninfo_all_blocks=1 00:18:28.241 --rc geninfo_unexecuted_blocks=1 00:18:28.241 00:18:28.241 ' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:28.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.241 --rc genhtml_branch_coverage=1 00:18:28.241 --rc genhtml_function_coverage=1 00:18:28.241 --rc genhtml_legend=1 00:18:28.241 --rc geninfo_all_blocks=1 00:18:28.241 --rc geninfo_unexecuted_blocks=1 00:18:28.241 00:18:28.241 ' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:28.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.241 --rc genhtml_branch_coverage=1 00:18:28.241 --rc genhtml_function_coverage=1 00:18:28.241 --rc genhtml_legend=1 00:18:28.241 --rc geninfo_all_blocks=1 00:18:28.241 --rc geninfo_unexecuted_blocks=1 00:18:28.241 00:18:28.241 ' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:28.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.241 --rc genhtml_branch_coverage=1 00:18:28.241 --rc genhtml_function_coverage=1 00:18:28.241 --rc genhtml_legend=1 00:18:28.241 --rc geninfo_all_blocks=1 00:18:28.241 --rc geninfo_unexecuted_blocks=1 00:18:28.241 00:18:28.241 ' 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.241 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:28.242 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:28.242 Cannot find device "nvmf_init_br" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:28.242 Cannot find device "nvmf_init_br2" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:28.242 Cannot find device "nvmf_tgt_br" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:28.242 Cannot find device "nvmf_tgt_br2" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:28.242 Cannot find device "nvmf_init_br" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:28.242 Cannot find device "nvmf_init_br2" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:28.242 Cannot find device "nvmf_tgt_br" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:28.242 Cannot find device "nvmf_tgt_br2" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:28.242 Cannot find device "nvmf_br" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:28.242 Cannot find device "nvmf_init_if" 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:18:28.242 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:28.501 Cannot find device "nvmf_init_if2" 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:28.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:28.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:28.501 01:39:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:28.501 01:39:59 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:28.501 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:28.502 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:28.502 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:28.502 00:18:28.502 --- 10.0.0.3 ping statistics --- 00:18:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.502 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:28.502 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:18:28.502 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:18:28.502 00:18:28.502 --- 10.0.0.4 ping statistics --- 00:18:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.502 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:28.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:28.502 00:18:28.502 --- 10.0.0.1 ping statistics --- 00:18:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.502 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:28.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:18:28.502 00:18:28.502 --- 10.0.0.2 ping statistics --- 00:18:28.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.502 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.502 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=91201 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 91201 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 91201 ']' 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
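[editor's note] The nvmf_veth_init trace above builds the test network before the target app starts: two veth pairs bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace and reachability verified by ping. The following is a condensed sketch reconstructed from the commands visible in this log, not the literal common.sh code; interface names, the namespace name, the 10.0.0.x addresses, and port 4420 are simply the values this run used (the second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, mirrors the same steps).

# sketch of the veth/bridge topology built by nvmf_veth_init (values taken from the trace above)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                               # bridge joining both bridge-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                            # initiator -> target reachability check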
00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.761 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:28.761 [2024-12-16 01:39:59.253713] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:28.761 [2024-12-16 01:39:59.253807] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.761 [2024-12-16 01:39:59.408074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:29.020 [2024-12-16 01:39:59.434432] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.020 [2024-12-16 01:39:59.434498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.020 [2024-12-16 01:39:59.434513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.020 [2024-12-16 01:39:59.434543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.020 [2024-12-16 01:39:59.434555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.020 [2024-12-16 01:39:59.435465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.020 [2024-12-16 01:39:59.437573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.020 [2024-12-16 01:39:59.437736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.020 [2024-12-16 01:39:59.441551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.020 [2024-12-16 01:39:59.475592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:29.020 01:39:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:18:29.588 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:18:29.588 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:29.847 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:18:29.847 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:30.105 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:30.105 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:18:30.105 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:30.105 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:30.105 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:30.364 [2024-12-16 01:40:00.844946] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.364 01:40:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.623 01:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:30.623 01:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:30.881 01:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:30.881 01:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:31.139 01:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:31.398 [2024-12-16 01:40:01.810100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:31.398 01:40:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:31.657 01:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:31.657 01:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:31.657 01:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:31.657 01:40:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:32.592 Initializing NVMe Controllers 00:18:32.592 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:32.592 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:32.592 Initialization complete. Launching workers. 00:18:32.592 ======================================================== 00:18:32.592 Latency(us) 00:18:32.592 Device Information : IOPS MiB/s Average min max 00:18:32.592 PCIE (0000:00:10.0) NSID 1 from core 0: 23483.70 91.73 1366.11 354.24 7589.31 00:18:32.592 ======================================================== 00:18:32.592 Total : 23483.70 91.73 1366.11 354.24 7589.31 00:18:32.592 00:18:32.592 01:40:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:33.968 Initializing NVMe Controllers 00:18:33.968 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:33.968 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:33.968 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:33.968 Initialization complete. Launching workers. 
00:18:33.968 ======================================================== 00:18:33.968 Latency(us) 00:18:33.968 Device Information : IOPS MiB/s Average min max 00:18:33.968 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3846.37 15.02 259.63 95.63 7176.73 00:18:33.968 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.88 0.48 8128.02 5025.13 12038.33 00:18:33.968 ======================================================== 00:18:33.968 Total : 3970.25 15.51 505.14 95.63 12038.33 00:18:33.968 00:18:33.968 01:40:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:35.350 Initializing NVMe Controllers 00:18:35.350 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:35.350 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:35.350 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:35.350 Initialization complete. Launching workers. 00:18:35.350 ======================================================== 00:18:35.350 Latency(us) 00:18:35.350 Device Information : IOPS MiB/s Average min max 00:18:35.350 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9223.98 36.03 3472.56 527.13 10407.13 00:18:35.350 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3967.99 15.50 8111.49 6147.44 13154.62 00:18:35.350 ======================================================== 00:18:35.350 Total : 13191.97 51.53 4867.90 527.13 13154.62 00:18:35.350 00:18:35.350 01:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:35.350 01:40:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:37.883 Initializing NVMe Controllers 00:18:37.883 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:37.883 Controller IO queue size 128, less than required. 00:18:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:37.883 Controller IO queue size 128, less than required. 00:18:37.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:37.883 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:37.883 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:37.883 Initialization complete. Launching workers. 
00:18:37.883 ======================================================== 00:18:37.883 Latency(us) 00:18:37.883 Device Information : IOPS MiB/s Average min max 00:18:37.883 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1989.49 497.37 64912.10 37952.88 114465.66 00:18:37.883 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 666.00 166.50 198754.40 48831.21 343242.44 00:18:37.883 ======================================================== 00:18:37.883 Total : 2655.49 663.87 98479.78 37952.88 343242.44 00:18:37.883 00:18:37.883 01:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:38.142 Initializing NVMe Controllers 00:18:38.142 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:38.142 Controller IO queue size 128, less than required. 00:18:38.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:38.142 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:38.142 Controller IO queue size 128, less than required. 00:18:38.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:38.142 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:38.142 WARNING: Some requested NVMe devices were skipped 00:18:38.142 No valid NVMe controllers or AIO or URING devices found 00:18:38.401 01:40:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:40.959 Initializing NVMe Controllers 00:18:40.959 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:40.959 Controller IO queue size 128, less than required. 00:18:40.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.959 Controller IO queue size 128, less than required. 00:18:40.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:40.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:40.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:40.959 Initialization complete. Launching workers. 
00:18:40.959 00:18:40.959 ==================== 00:18:40.959 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:40.959 TCP transport: 00:18:40.959 polls: 9479 00:18:40.959 idle_polls: 3693 00:18:40.959 sock_completions: 5786 00:18:40.959 nvme_completions: 7269 00:18:40.959 submitted_requests: 10894 00:18:40.959 queued_requests: 1 00:18:40.959 00:18:40.959 ==================== 00:18:40.959 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:40.959 TCP transport: 00:18:40.959 polls: 9716 00:18:40.959 idle_polls: 4973 00:18:40.959 sock_completions: 4743 00:18:40.959 nvme_completions: 6965 00:18:40.959 submitted_requests: 10420 00:18:40.959 queued_requests: 1 00:18:40.959 ======================================================== 00:18:40.959 Latency(us) 00:18:40.959 Device Information : IOPS MiB/s Average min max 00:18:40.959 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1815.70 453.93 72176.24 40160.67 103344.01 00:18:40.959 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1739.76 434.94 74119.56 30116.04 114204.58 00:18:40.959 ======================================================== 00:18:40.959 Total : 3555.46 888.86 73127.14 30116.04 114204.58 00:18:40.959 00:18:40.959 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:40.959 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:41.218 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:41.218 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:41.218 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=701efeaf-d448-4be9-819d-28fb855dadb9 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 701efeaf-d448-4be9-819d-28fb855dadb9 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=701efeaf-d448-4be9-819d-28fb855dadb9 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:41.477 01:40:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:41.736 { 00:18:41.736 "uuid": "701efeaf-d448-4be9-819d-28fb855dadb9", 00:18:41.736 "name": "lvs_0", 00:18:41.736 "base_bdev": "Nvme0n1", 00:18:41.736 "total_data_clusters": 1278, 00:18:41.736 "free_clusters": 1278, 00:18:41.736 "block_size": 4096, 00:18:41.736 "cluster_size": 4194304 00:18:41.736 } 00:18:41.736 ]' 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="701efeaf-d448-4be9-819d-28fb855dadb9") .free_clusters' 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="701efeaf-d448-4be9-819d-28fb855dadb9") .cluster_size' 00:18:41.736 5112 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:41.736 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 701efeaf-d448-4be9-819d-28fb855dadb9 lbd_0 5112 00:18:42.302 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=cb0b6aac-50b1-4969-b3ce-eec654a38cb0 00:18:42.302 01:40:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore cb0b6aac-50b1-4969-b3ce-eec654a38cb0 lvs_n_0 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=40866109-a9e2-4346-ac09-b38c3cf5771c 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 40866109-a9e2-4346-ac09-b38c3cf5771c 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=40866109-a9e2-4346-ac09-b38c3cf5771c 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:42.561 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:42.819 { 00:18:42.819 "uuid": "701efeaf-d448-4be9-819d-28fb855dadb9", 00:18:42.819 "name": "lvs_0", 00:18:42.819 "base_bdev": "Nvme0n1", 00:18:42.819 "total_data_clusters": 1278, 00:18:42.819 "free_clusters": 0, 00:18:42.819 "block_size": 4096, 00:18:42.819 "cluster_size": 4194304 00:18:42.819 }, 00:18:42.819 { 00:18:42.819 "uuid": "40866109-a9e2-4346-ac09-b38c3cf5771c", 00:18:42.819 "name": "lvs_n_0", 00:18:42.819 "base_bdev": "cb0b6aac-50b1-4969-b3ce-eec654a38cb0", 00:18:42.819 "total_data_clusters": 1276, 00:18:42.819 "free_clusters": 1276, 00:18:42.819 "block_size": 4096, 00:18:42.819 "cluster_size": 4194304 00:18:42.819 } 00:18:42.819 ]' 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="40866109-a9e2-4346-ac09-b38c3cf5771c") .free_clusters' 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="40866109-a9e2-4346-ac09-b38c3cf5771c") .cluster_size' 00:18:42.819 5104 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:42.819 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 40866109-a9e2-4346-ac09-b38c3cf5771c lbd_nest_0 5104 00:18:43.078 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=344b2a58-be46-471e-9c68-45ff081cd341 00:18:43.078 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:43.336 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:43.336 01:40:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 344b2a58-be46-471e-9c68-45ff081cd341 00:18:43.595 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:43.853 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:43.853 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:43.853 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:43.854 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:43.854 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:44.421 Initializing NVMe Controllers 00:18:44.421 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:44.421 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:44.421 WARNING: Some requested NVMe devices were skipped 00:18:44.421 No valid NVMe controllers or AIO or URING devices found 00:18:44.421 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:44.421 01:40:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:56.626 Initializing NVMe Controllers 00:18:56.626 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:56.626 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:56.626 Initialization complete. Launching workers. 
00:18:56.626 ======================================================== 00:18:56.626 Latency(us) 00:18:56.626 Device Information : IOPS MiB/s Average min max 00:18:56.626 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 960.81 120.10 1039.86 319.60 8160.00 00:18:56.626 ======================================================== 00:18:56.626 Total : 960.81 120.10 1039.86 319.60 8160.00 00:18:56.626 00:18:56.626 01:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:56.626 01:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:56.626 01:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:56.626 Initializing NVMe Controllers 00:18:56.626 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:56.626 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:56.626 WARNING: Some requested NVMe devices were skipped 00:18:56.626 No valid NVMe controllers or AIO or URING devices found 00:18:56.626 01:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:56.626 01:40:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:06.618 Initializing NVMe Controllers 00:19:06.618 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:06.618 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:06.618 Initialization complete. Launching workers. 
00:19:06.618 ======================================================== 00:19:06.618 Latency(us) 00:19:06.618 Device Information : IOPS MiB/s Average min max 00:19:06.618 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1345.78 168.22 23804.79 6373.87 63658.35 00:19:06.618 ======================================================== 00:19:06.618 Total : 1345.78 168.22 23804.79 6373.87 63658.35 00:19:06.618 00:19:06.618 01:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:19:06.618 01:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:06.618 01:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:06.618 Initializing NVMe Controllers 00:19:06.618 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:06.618 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:19:06.618 WARNING: Some requested NVMe devices were skipped 00:19:06.618 No valid NVMe controllers or AIO or URING devices found 00:19:06.618 01:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:19:06.618 01:40:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:16.606 Initializing NVMe Controllers 00:19:16.606 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:16.606 Controller IO queue size 128, less than required. 00:19:16.606 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:16.606 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:16.606 Initialization complete. Launching workers. 
00:19:16.606 ======================================================== 00:19:16.606 Latency(us) 00:19:16.606 Device Information : IOPS MiB/s Average min max 00:19:16.606 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4087.87 510.98 31324.75 12313.44 67176.38 00:19:16.606 ======================================================== 00:19:16.606 Total : 4087.87 510.98 31324.75 12313.44 67176.38 00:19:16.606 00:19:16.606 01:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.606 01:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 344b2a58-be46-471e-9c68-45ff081cd341 00:19:16.606 01:40:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:16.864 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete cb0b6aac-50b1-4969-b3ce-eec654a38cb0 00:19:16.864 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:17.122 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:17.122 rmmod nvme_tcp 00:19:17.381 rmmod nvme_fabrics 00:19:17.381 rmmod nvme_keyring 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 91201 ']' 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 91201 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 91201 ']' 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 91201 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91201 00:19:17.381 killing process with pid 91201 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91201' 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 91201 00:19:17.381 01:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 91201 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:19:19.309 00:19:19.309 real 0m51.224s 00:19:19.309 user 3m13.188s 00:19:19.309 sys 0m12.105s 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:19:19.309 ************************************ 00:19:19.309 END TEST nvmf_perf 00:19:19.309 ************************************ 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.309 ************************************ 00:19:19.309 START TEST nvmf_fio_host 00:19:19.309 ************************************ 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:19:19.309 * Looking for test storage... 00:19:19.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:19.309 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:19.584 01:40:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.584 --rc genhtml_branch_coverage=1 00:19:19.584 --rc genhtml_function_coverage=1 00:19:19.584 --rc genhtml_legend=1 00:19:19.584 --rc geninfo_all_blocks=1 00:19:19.584 --rc geninfo_unexecuted_blocks=1 00:19:19.584 00:19:19.584 ' 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.584 --rc genhtml_branch_coverage=1 00:19:19.584 --rc genhtml_function_coverage=1 00:19:19.584 --rc genhtml_legend=1 00:19:19.584 --rc geninfo_all_blocks=1 00:19:19.584 --rc geninfo_unexecuted_blocks=1 00:19:19.584 00:19:19.584 ' 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.584 --rc genhtml_branch_coverage=1 00:19:19.584 --rc genhtml_function_coverage=1 00:19:19.584 --rc genhtml_legend=1 00:19:19.584 --rc geninfo_all_blocks=1 00:19:19.584 --rc geninfo_unexecuted_blocks=1 00:19:19.584 00:19:19.584 ' 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:19.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:19.584 --rc genhtml_branch_coverage=1 00:19:19.584 --rc genhtml_function_coverage=1 00:19:19.584 --rc genhtml_legend=1 00:19:19.584 --rc geninfo_all_blocks=1 00:19:19.584 --rc geninfo_unexecuted_blocks=1 00:19:19.584 00:19:19.584 ' 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.584 01:40:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.584 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.585 01:40:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:19.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:19.585 Cannot find device "nvmf_init_br" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:19.585 Cannot find device "nvmf_init_br2" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:19.585 Cannot find device "nvmf_tgt_br" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:19:19.585 Cannot find device "nvmf_tgt_br2" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:19.585 Cannot find device "nvmf_init_br" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:19.585 Cannot find device "nvmf_init_br2" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:19.585 Cannot find device "nvmf_tgt_br" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:19.585 Cannot find device "nvmf_tgt_br2" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:19.585 Cannot find device "nvmf_br" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:19.585 Cannot find device "nvmf_init_if" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:19.585 Cannot find device "nvmf_init_if2" 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:19.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:19.585 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:19.585 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:19.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:19.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:19:19.844 00:19:19.844 --- 10.0.0.3 ping statistics --- 00:19:19.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.844 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:19.844 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:19.844 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:19:19.844 00:19:19.844 --- 10.0.0.4 ping statistics --- 00:19:19.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.844 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:19.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:19.844 00:19:19.844 --- 10.0.0.1 ping statistics --- 00:19:19.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.844 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:19.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:19.844 00:19:19.844 --- 10.0.0.2 ping statistics --- 00:19:19.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.844 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=92060 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 92060 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 92060 ']' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.844 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.845 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.103 [2024-12-16 01:40:50.503061] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:20.103 [2024-12-16 01:40:50.503161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.103 [2024-12-16 01:40:50.653988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.103 [2024-12-16 01:40:50.677567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.103 [2024-12-16 01:40:50.677626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.103 [2024-12-16 01:40:50.677640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.103 [2024-12-16 01:40:50.677650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.103 [2024-12-16 01:40:50.677659] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:20.103 [2024-12-16 01:40:50.678550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.103 [2024-12-16 01:40:50.678678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.103 [2024-12-16 01:40:50.678810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:20.103 [2024-12-16 01:40:50.678817] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.103 [2024-12-16 01:40:50.711797] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:20.371 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.371 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:19:20.371 01:40:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:20.631 [2024-12-16 01:40:51.035660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.631 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:19:20.631 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:20.631 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.631 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:20.889 Malloc1 00:19:20.889 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.147 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:21.406 01:40:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:21.664 [2024-12-16 01:40:52.148473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:21.664 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:19:21.922 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:21.922 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:21.923 01:40:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:21.923 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:21.923 fio-3.35 00:19:21.923 Starting 1 thread 00:19:24.453 00:19:24.453 test: (groupid=0, jobs=1): err= 0: pid=92130: Mon Dec 16 01:40:54 2024 00:19:24.453 read: IOPS=9494, BW=37.1MiB/s (38.9MB/s)(74.4MiB/2006msec) 00:19:24.453 slat (nsec): min=1776, max=305268, avg=2225.78, stdev=3136.42 00:19:24.453 clat (usec): min=2523, max=12127, avg=7017.90, stdev=565.56 00:19:24.453 lat (usec): min=2577, max=12129, avg=7020.13, stdev=565.42 00:19:24.453 clat percentiles (usec): 00:19:24.453 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:19:24.453 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:19:24.453 | 70.00th=[ 7242], 80.00th=[ 7373], 90.00th=[ 7635], 95.00th=[ 7898], 00:19:24.453 | 99.00th=[ 8455], 99.50th=[ 9372], 99.90th=[11207], 99.95th=[11338], 00:19:24.453 | 99.99th=[12125] 00:19:24.453 bw ( KiB/s): min=36752, max=38616, per=99.98%, avg=37970.00, stdev=831.62, samples=4 00:19:24.453 iops : min= 9188, max= 9654, avg=9492.50, stdev=207.91, samples=4 00:19:24.453 write: IOPS=9503, BW=37.1MiB/s (38.9MB/s)(74.5MiB/2006msec); 0 zone resets 00:19:24.453 slat (nsec): min=1840, max=220965, avg=2300.63, stdev=2251.36 00:19:24.453 clat (usec): min=2383, max=11947, avg=6401.50, stdev=511.82 00:19:24.453 lat (usec): min=2396, max=11949, avg=6403.80, stdev=511.78 00:19:24.453 
clat percentiles (usec): 00:19:24.453 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:19:24.453 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6456], 00:19:24.453 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7177], 00:19:24.453 | 99.00th=[ 7635], 99.50th=[ 8225], 99.90th=[10552], 99.95th=[11207], 00:19:24.453 | 99.99th=[11863] 00:19:24.453 bw ( KiB/s): min=37632, max=38376, per=99.95%, avg=37996.00, stdev=416.03, samples=4 00:19:24.453 iops : min= 9408, max= 9594, avg=9499.00, stdev=104.01, samples=4 00:19:24.453 lat (msec) : 4=0.08%, 10=99.64%, 20=0.28% 00:19:24.453 cpu : usr=71.47%, sys=21.95%, ctx=9, majf=0, minf=7 00:19:24.453 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:24.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.453 issued rwts: total=19046,19064,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.453 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.453 00:19:24.453 Run status group 0 (all jobs): 00:19:24.453 READ: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.4MiB (78.0MB), run=2006-2006msec 00:19:24.453 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.5MiB (78.1MB), run=2006-2006msec 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:24.453 01:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:19:24.453 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:19:24.453 fio-3.35 00:19:24.453 Starting 1 thread 00:19:26.983 00:19:26.983 test: (groupid=0, jobs=1): err= 0: pid=92173: Mon Dec 16 01:40:57 2024 00:19:26.983 read: IOPS=8906, BW=139MiB/s (146MB/s)(279MiB/2007msec) 00:19:26.983 slat (usec): min=2, max=117, avg= 3.50, stdev= 2.31 00:19:26.983 clat (usec): min=2614, max=15884, avg=8005.37, stdev=2407.38 00:19:26.983 lat (usec): min=2617, max=15887, avg=8008.87, stdev=2407.45 00:19:26.983 clat percentiles (usec): 00:19:26.983 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 5080], 20.00th=[ 5800], 00:19:26.983 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7767], 60.00th=[ 8455], 00:19:26.983 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[12518], 00:19:26.983 | 99.00th=[14353], 99.50th=[14877], 99.90th=[15401], 99.95th=[15533], 00:19:26.983 | 99.99th=[15795] 00:19:26.983 bw ( KiB/s): min=65632, max=75968, per=49.78%, avg=70938.25, stdev=5421.53, samples=4 00:19:26.983 iops : min= 4102, max= 4748, avg=4433.50, stdev=338.70, samples=4 00:19:26.983 write: IOPS=5144, BW=80.4MiB/s (84.3MB/s)(144MiB/1796msec); 0 zone resets 00:19:26.983 slat (usec): min=31, max=359, avg=36.30, stdev= 9.15 00:19:26.983 clat (usec): min=3798, max=19629, avg=11370.93, stdev=2238.10 00:19:26.983 lat (usec): min=3830, max=19662, avg=11407.23, stdev=2239.81 00:19:26.983 clat percentiles (usec): 00:19:26.983 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9372], 00:19:26.983 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11863], 00:19:26.983 | 70.00th=[12518], 80.00th=[13304], 90.00th=[14484], 95.00th=[15401], 00:19:26.983 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:19:26.983 | 99.99th=[19530] 00:19:26.983 bw ( KiB/s): min=68608, max=78720, per=89.75%, avg=73880.75, stdev=5360.88, samples=4 00:19:26.983 iops : min= 4288, max= 4920, avg=4617.50, stdev=335.00, samples=4 00:19:26.983 lat (msec) : 4=1.44%, 10=62.13%, 20=36.43% 00:19:26.983 cpu : usr=84.15%, sys=12.06%, ctx=3, majf=0, minf=3 00:19:26.983 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:26.983 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.983 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.983 issued rwts: total=17875,9240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.983 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.983 00:19:26.983 Run status group 0 (all jobs): 00:19:26.983 READ: 
bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=279MiB (293MB), run=2007-2007msec 00:19:26.983 WRITE: bw=80.4MiB/s (84.3MB/s), 80.4MiB/s-80.4MiB/s (84.3MB/s-84.3MB/s), io=144MiB (151MB), run=1796-1796msec 00:19:26.983 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:19:27.242 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:19:27.500 Nvme0n1 00:19:27.500 01:40:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=995eedd5-0eea-4ff4-80dc-ae826b744d40 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 995eedd5-0eea-4ff4-80dc-ae826b744d40 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=995eedd5-0eea-4ff4-80dc-ae826b744d40 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:27.758 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:28.016 { 00:19:28.016 "uuid": "995eedd5-0eea-4ff4-80dc-ae826b744d40", 00:19:28.016 "name": "lvs_0", 00:19:28.016 "base_bdev": "Nvme0n1", 00:19:28.016 "total_data_clusters": 4, 00:19:28.016 "free_clusters": 4, 00:19:28.016 "block_size": 4096, 00:19:28.016 "cluster_size": 1073741824 00:19:28.016 } 00:19:28.016 ]' 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="995eedd5-0eea-4ff4-80dc-ae826b744d40") .free_clusters' 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:19:28.016 01:40:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="995eedd5-0eea-4ff4-80dc-ae826b744d40") .cluster_size' 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:19:28.016 4096 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:19:28.016 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:19:28.275 5bd6e53c-eafb-434e-8fca-08eaded64a08 00:19:28.275 01:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:19:28.533 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:19:28.791 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:29.049 01:40:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:29.049 01:40:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:29.308 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:29.308 fio-3.35 00:19:29.308 Starting 1 thread 00:19:31.846 00:19:31.846 test: (groupid=0, jobs=1): err= 0: pid=92282: Mon Dec 16 01:41:02 2024 00:19:31.846 read: IOPS=6239, BW=24.4MiB/s (25.6MB/s)(48.9MiB/2008msec) 00:19:31.846 slat (nsec): min=1860, max=302592, avg=2693.83, stdev=3881.88 00:19:31.846 clat (usec): min=3027, max=18433, avg=10726.48, stdev=903.96 00:19:31.846 lat (usec): min=3037, max=18435, avg=10729.17, stdev=903.66 00:19:31.846 clat percentiles (usec): 00:19:31.846 | 1.00th=[ 8848], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:19:31.846 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:19:31.846 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:19:31.846 | 99.00th=[12780], 99.50th=[13042], 99.90th=[17433], 99.95th=[17433], 00:19:31.846 | 99.99th=[18482] 00:19:31.846 bw ( KiB/s): min=23960, max=25480, per=99.82%, avg=24914.00, stdev=668.48, samples=4 00:19:31.846 iops : min= 5990, max= 6370, avg=6228.50, stdev=167.12, samples=4 00:19:31.846 write: IOPS=6231, BW=24.3MiB/s (25.5MB/s)(48.9MiB/2008msec); 0 zone resets 00:19:31.846 slat (nsec): min=1905, max=248780, avg=2746.72, stdev=3036.60 00:19:31.846 clat (usec): min=2359, max=18109, avg=9721.90, stdev=845.62 00:19:31.846 lat (usec): min=2373, max=18112, avg=9724.65, stdev=845.49 00:19:31.846 clat percentiles (usec): 00:19:31.846 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:19:31.846 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:19:31.846 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:19:31.846 | 99.00th=[11600], 99.50th=[11863], 99.90th=[16319], 99.95th=[16581], 00:19:31.846 | 99.99th=[17957] 00:19:31.846 bw ( KiB/s): min=24784, max=25008, per=99.96%, avg=24914.00, stdev=93.84, samples=4 00:19:31.846 iops : min= 6196, max= 6252, avg=6228.50, stdev=23.46, samples=4 00:19:31.846 lat (msec) : 4=0.06%, 10=41.44%, 20=58.50% 00:19:31.846 cpu : usr=74.49%, sys=19.98%, ctx=26, majf=0, minf=7 00:19:31.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:31.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:31.846 issued rwts: total=12529,12512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:31.846 00:19:31.846 Run status group 0 (all jobs): 00:19:31.846 READ: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=48.9MiB (51.3MB), 
run=2008-2008msec 00:19:31.846 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB (51.2MB), run=2008-2008msec 00:19:31.846 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:31.846 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9f9ac100-a2b1-47c4-af86-070766ad4844 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9f9ac100-a2b1-47c4-af86-070766ad4844 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=9f9ac100-a2b1-47c4-af86-070766ad4844 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:32.105 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:32.363 { 00:19:32.363 "uuid": "995eedd5-0eea-4ff4-80dc-ae826b744d40", 00:19:32.363 "name": "lvs_0", 00:19:32.363 "base_bdev": "Nvme0n1", 00:19:32.363 "total_data_clusters": 4, 00:19:32.363 "free_clusters": 0, 00:19:32.363 "block_size": 4096, 00:19:32.363 "cluster_size": 1073741824 00:19:32.363 }, 00:19:32.363 { 00:19:32.363 "uuid": "9f9ac100-a2b1-47c4-af86-070766ad4844", 00:19:32.363 "name": "lvs_n_0", 00:19:32.363 "base_bdev": "5bd6e53c-eafb-434e-8fca-08eaded64a08", 00:19:32.363 "total_data_clusters": 1022, 00:19:32.363 "free_clusters": 1022, 00:19:32.363 "block_size": 4096, 00:19:32.363 "cluster_size": 4194304 00:19:32.363 } 00:19:32.363 ]' 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9f9ac100-a2b1-47c4-af86-070766ad4844") .free_clusters' 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9f9ac100-a2b1-47c4-af86-070766ad4844") .cluster_size' 00:19:32.363 4088 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:19:32.363 01:41:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:32.622 0c79e736-48c9-495d-8161-af5a9a9b9453 00:19:32.622 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:32.880 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:33.138 01:41:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:33.396 01:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:33.396 01:41:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:33.396 01:41:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:33.396 01:41:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:33.396 01:41:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:33.655 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:33.655 fio-3.35 00:19:33.655 Starting 1 thread 00:19:36.186 00:19:36.186 test: (groupid=0, jobs=1): err= 0: pid=92361: Mon Dec 16 01:41:06 2024 00:19:36.186 read: 
IOPS=5689, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2010msec) 00:19:36.186 slat (nsec): min=1938, max=314699, avg=2810.29, stdev=4231.49 00:19:36.186 clat (usec): min=3383, max=21584, avg=11782.64, stdev=958.99 00:19:36.186 lat (usec): min=3392, max=21587, avg=11785.45, stdev=958.67 00:19:36.186 clat percentiles (usec): 00:19:36.186 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10683], 20.00th=[11076], 00:19:36.186 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:19:36.186 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:19:36.186 | 99.00th=[13829], 99.50th=[14222], 99.90th=[17171], 99.95th=[18744], 00:19:36.186 | 99.99th=[20317] 00:19:36.186 bw ( KiB/s): min=21816, max=23208, per=100.00%, avg=22766.00, stdev=641.38, samples=4 00:19:36.186 iops : min= 5454, max= 5802, avg=5691.50, stdev=160.34, samples=4 00:19:36.186 write: IOPS=5669, BW=22.1MiB/s (23.2MB/s)(44.5MiB/2010msec); 0 zone resets 00:19:36.186 slat (nsec): min=1969, max=356326, avg=2883.26, stdev=3997.15 00:19:36.186 clat (usec): min=2504, max=20295, avg=10675.63, stdev=943.98 00:19:36.186 lat (usec): min=2518, max=20297, avg=10678.52, stdev=943.85 00:19:36.186 clat percentiles (usec): 00:19:36.186 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:19:36.186 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[10814], 00:19:36.186 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:19:36.186 | 99.00th=[12780], 99.50th=[13304], 99.90th=[18744], 99.95th=[19006], 00:19:36.186 | 99.99th=[20317] 00:19:36.186 bw ( KiB/s): min=22336, max=22848, per=99.85%, avg=22642.00, stdev=219.81, samples=4 00:19:36.186 iops : min= 5584, max= 5712, avg=5660.50, stdev=54.95, samples=4 00:19:36.186 lat (msec) : 4=0.05%, 10=11.40%, 20=88.52%, 50=0.03% 00:19:36.186 cpu : usr=73.42%, sys=21.55%, ctx=17, majf=0, minf=7 00:19:36.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:36.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:36.186 issued rwts: total=11436,11395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:36.186 00:19:36.186 Run status group 0 (all jobs): 00:19:36.186 READ: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2010-2010msec 00:19:36.186 WRITE: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=44.5MiB (46.7MB), run=2010-2010msec 00:19:36.186 01:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:36.186 01:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:36.186 01:41:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:36.444 01:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:36.702 01:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:37.268 01:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:37.268 01:41:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:38.203 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:38.203 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:38.203 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:38.203 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.204 rmmod nvme_tcp 00:19:38.204 rmmod nvme_fabrics 00:19:38.204 rmmod nvme_keyring 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 92060 ']' 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 92060 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 92060 ']' 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 92060 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92060 00:19:38.204 killing process with pid 92060 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92060' 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 92060 00:19:38.204 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 92060 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.462 
01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:38.462 01:41:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:38.462 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:38.721 ************************************ 00:19:38.721 END TEST nvmf_fio_host 00:19:38.721 ************************************ 00:19:38.721 00:19:38.721 real 0m19.372s 00:19:38.721 user 1m24.833s 00:19:38.721 sys 0m4.245s 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.721 ************************************ 00:19:38.721 START TEST nvmf_failover 00:19:38.721 ************************************ 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:38.721 * Looking for test storage... 
00:19:38.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:19:38.721 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.981 --rc genhtml_branch_coverage=1 00:19:38.981 --rc genhtml_function_coverage=1 00:19:38.981 --rc genhtml_legend=1 00:19:38.981 --rc geninfo_all_blocks=1 00:19:38.981 --rc geninfo_unexecuted_blocks=1 00:19:38.981 00:19:38.981 ' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.981 --rc genhtml_branch_coverage=1 00:19:38.981 --rc genhtml_function_coverage=1 00:19:38.981 --rc genhtml_legend=1 00:19:38.981 --rc geninfo_all_blocks=1 00:19:38.981 --rc geninfo_unexecuted_blocks=1 00:19:38.981 00:19:38.981 ' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.981 --rc genhtml_branch_coverage=1 00:19:38.981 --rc genhtml_function_coverage=1 00:19:38.981 --rc genhtml_legend=1 00:19:38.981 --rc geninfo_all_blocks=1 00:19:38.981 --rc geninfo_unexecuted_blocks=1 00:19:38.981 00:19:38.981 ' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.981 --rc genhtml_branch_coverage=1 00:19:38.981 --rc genhtml_function_coverage=1 00:19:38.981 --rc genhtml_legend=1 00:19:38.981 --rc geninfo_all_blocks=1 00:19:38.981 --rc geninfo_unexecuted_blocks=1 00:19:38.981 00:19:38.981 ' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.981 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.982 
01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:38.982 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
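The target's command line is built up as a bash array: options are appended to NVMF_APP here, and later the array is prefixed with the network-namespace wrapper before launch. A minimal sketch of that composition, under the assumption that NVMF_APP initially holds just the nvmf_tgt binary path; the resulting command matches the one executed further down in this log:

# Argument-composition pattern used by the harness (initialisation of the base
# array is assumed here).
NVMF_APP_SHM_ID=0
NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # shm id + tracepoint group mask
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # run inside the target namespace
"${NVMF_APP[@]}" -m 0xE &                               # reactors on cores 1-3
nvmfpid=$!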
00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:38.982 Cannot find device "nvmf_init_br" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:38.982 Cannot find device "nvmf_init_br2" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:38.982 Cannot find device "nvmf_tgt_br" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.982 Cannot find device "nvmf_tgt_br2" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:38.982 Cannot find device "nvmf_init_br" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:38.982 Cannot find device "nvmf_init_br2" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:38.982 Cannot find device "nvmf_tgt_br" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:38.982 Cannot find device "nvmf_tgt_br2" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:38.982 Cannot find device "nvmf_br" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:38.982 Cannot find device "nvmf_init_if" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:38.982 Cannot find device "nvmf_init_if2" 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:38.982 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:39.242 
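The records above create the target namespace and the four veth pairs; the addressing, link-up and bridging steps follow below. For reference, a standalone sketch of one initiator-to-target leg of the same topology, reduced to a single pair of interfaces on each side; interface names, namespace and addresses follow the ones used in this run:

# One leg of the topology: host-side initiator interface and a target interface
# living in the namespace, joined through the nvmf_br bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br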
01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:39.242 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:39.242 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:19:39.242 00:19:39.242 --- 10.0.0.3 ping statistics --- 00:19:39.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.242 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:39.242 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:39.242 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:19:39.242 00:19:39.242 --- 10.0.0.4 ping statistics --- 00:19:39.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.242 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:39.242 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:39.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:19:39.242 00:19:39.242 --- 10.0.0.1 ping statistics --- 00:19:39.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.242 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:19:39.501 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:39.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:39.501 00:19:39.501 --- 10.0.0.2 ping statistics --- 00:19:39.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.502 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=92657 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 92657 00:19:39.502 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92657 ']' 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.502 01:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:39.502 [2024-12-16 01:41:09.999041] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:39.502 [2024-12-16 01:41:09.999213] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.502 [2024-12-16 01:41:10.156302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.760 [2024-12-16 01:41:10.182127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.761 [2024-12-16 01:41:10.182479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.761 [2024-12-16 01:41:10.182757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.761 [2024-12-16 01:41:10.182981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.761 [2024-12-16 01:41:10.183100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
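The target is started with -m 0xE: 0xE is binary 1110, so its reactors land on cores 1, 2 and 3, matching the three reactor threads reported just below (core 0 is left free for bdevperf, which is later launched with -c 0x1). A quick way to expand such a core mask:

# Expand the -m 0xE core mask into the individual reactor cores.
mask=0xE
for core in $(seq 0 31); do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
done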
00:19:39.761 [2024-12-16 01:41:10.183996] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.761 [2024-12-16 01:41:10.184220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.761 [2024-12-16 01:41:10.184234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.761 [2024-12-16 01:41:10.220858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.761 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:40.019 [2024-12-16 01:41:10.596958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.019 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:40.279 Malloc0 00:19:40.279 01:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.537 01:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.796 01:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:41.055 [2024-12-16 01:41:11.564482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:41.055 01:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:41.314 [2024-12-16 01:41:11.836663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:41.314 01:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:41.573 [2024-12-16 01:41:12.064836] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=92703 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
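The target-side configuration driven through rpc.py above, collected in one place for readability; these are the same calls this run makes: the TCP transport with the harness's options, a 64 MiB / 512 B malloc bdev, one subsystem exposing it as a namespace, and the three listeners on 10.0.0.3 that the failover test will shuffle.

# Consolidated recap of the RPC sequence issued above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s "$port"
done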
00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 92703 /var/tmp/bdevperf.sock 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92703 ']' 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:41.573 01:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:42.508 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:42.508 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:42.508 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:42.767 NVMe0n1 00:19:42.767 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:43.025 00:19:43.284 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=92726 00:19:43.284 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.284 01:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:44.219 01:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:44.477 01:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:47.762 01:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:47.762 00:19:47.762 01:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:48.019 01:41:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:51.305 01:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:51.305 [2024-12-16 01:41:21.865360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:51.305 01:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:52.241 01:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:52.808 01:41:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 92726 00:19:59.378 { 00:19:59.378 "results": [ 00:19:59.378 { 00:19:59.378 "job": "NVMe0n1", 00:19:59.378 "core_mask": "0x1", 00:19:59.378 "workload": "verify", 00:19:59.378 "status": "finished", 00:19:59.378 "verify_range": { 00:19:59.378 "start": 0, 00:19:59.378 "length": 16384 00:19:59.378 }, 00:19:59.378 "queue_depth": 128, 00:19:59.378 "io_size": 4096, 00:19:59.378 "runtime": 15.007989, 00:19:59.378 "iops": 9962.960393960842, 00:19:59.378 "mibps": 38.91781403890954, 00:19:59.378 "io_failed": 3309, 00:19:59.378 "io_timeout": 0, 00:19:59.378 "avg_latency_us": 12540.54288846471, 00:19:59.378 "min_latency_us": 558.5454545454545, 00:19:59.378 "max_latency_us": 16324.421818181818 00:19:59.378 } 00:19:59.378 ], 00:19:59.378 "core_count": 1 00:19:59.378 } 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 92703 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92703 ']' 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92703 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92703 00:19:59.378 killing process with pid 92703 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92703' 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92703 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92703 00:19:59.378 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:59.378 [2024-12-16 01:41:12.131204] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:59.378 [2024-12-16 01:41:12.131303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92703 ] 00:19:59.378 [2024-12-16 01:41:12.277862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.378 [2024-12-16 01:41:12.303095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.378 [2024-12-16 01:41:12.336795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:59.378 Running I/O for 15 seconds... 
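The JSON block above is bdevperf's per-job summary from try.txt: roughly 9.96k IOPS over the 15 s verify run, with 3309 failed I/Os, which is consistent with each listener removal aborting the in-flight commands on the dropped path (the SQ-deletion completion records that follow). If that summary is saved to a file, the headline numbers can be pulled out with jq; the filename below is a placeholder, not something this run produces:

# Extract the headline numbers from the bdevperf summary ("results.json" is a
# placeholder for wherever the JSON block above has been saved).
jq '{job: .results[0].job,
     iops: .results[0].iops,
     io_failed: .results[0].io_failed,
     avg_latency_us: .results[0].avg_latency_us,
     cores: .core_count}' results.json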
00:19:59.378 7776.00 IOPS, 30.38 MiB/s [2024-12-16T01:41:30.036Z] [2024-12-16 01:41:14.973462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.378 [2024-12-16 01:41:14.973740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:59.378 [2024-12-16 01:41:14.973806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.973977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.973991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.974003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.974021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.974034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.974049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.378 [2024-12-16 01:41:14.974061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.378 [2024-12-16 01:41:14.974101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 
01:41:14.974148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.974773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.974980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.974994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74832 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.379 [2024-12-16 01:41:14.975221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.975248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.975274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.975301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.379 [2024-12-16 01:41:14.975315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.379 [2024-12-16 01:41:14.975328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 
[2024-12-16 01:41:14.975355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.975388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.975415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.975442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.975981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.975995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.976008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.976035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.976062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.976096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.380 [2024-12-16 01:41:14.976123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.380 [2024-12-16 01:41:14.976408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.380 [2024-12-16 01:41:14.976420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.381 [2024-12-16 01:41:14.976454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.381 [2024-12-16 01:41:14.976481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 
[2024-12-16 01:41:14.976495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.381 [2024-12-16 01:41:14.976507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.381 [2024-12-16 01:41:14.976561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.381 [2024-12-16 01:41:14.976590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.976981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.976993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.977020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.381 [2024-12-16 01:41:14.977047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4480 is same with the state(6) to be set 00:19:59.381 [2024-12-16 01:41:14.977076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977086] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75192 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977121] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75520 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75528 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75536 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75544 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75552 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75560 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75568 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.381 [2024-12-16 01:41:14.977446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.381 [2024-12-16 01:41:14.977455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75576 len:8 PRP1 0x0 PRP2 0x0 00:19:59.381 [2024-12-16 01:41:14.977467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977514] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:59.381 [2024-12-16 01:41:14.977578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.381 [2024-12-16 01:41:14.977600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.381 [2024-12-16 01:41:14.977626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.381 [2024-12-16 01:41:14.977662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.381 [2024-12-16 01:41:14.977675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.382 [2024-12-16 01:41:14.977687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:14.977700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:59.382 [2024-12-16 01:41:14.977754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392480 (9): Bad file descriptor 00:19:59.382 [2024-12-16 01:41:14.981367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:59.382 [2024-12-16 01:41:15.006482] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:19:59.382 8795.50 IOPS, 34.36 MiB/s [2024-12-16T01:41:30.040Z] 9282.33 IOPS, 36.26 MiB/s [2024-12-16T01:41:30.040Z] 9513.25 IOPS, 37.16 MiB/s [2024-12-16T01:41:30.040Z] [2024-12-16 01:41:18.588282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588677] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:112208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.382 [2024-12-16 01:41:18.588830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588933] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.588980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.588992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.589006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.589018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.589031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.589043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.589057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.589068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.589082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.589094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.589108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.589120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.382 [2024-12-16 01:41:18.589134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.382 [2024-12-16 01:41:18.589146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:112304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:112336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:112880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:59.383 [2024-12-16 01:41:18.589778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.383 [2024-12-16 01:41:18.589918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:112360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.589984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.589997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:112384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 
01:41:18.590042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:112416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.383 [2024-12-16 01:41:18.590279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.383 [2024-12-16 01:41:18.590293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.590323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:112464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.590353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.590383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.590450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:112976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:112984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.590984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 
lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.590998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:112528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:112536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:112544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.384 [2024-12-16 01:41:18.591349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.384 [2024-12-16 01:41:18.591455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.384 [2024-12-16 01:41:18.591469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:18.591481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:18.591514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 
[2024-12-16 01:41:18.591585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:18.591614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591888] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.591927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.591989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.592003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.592030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.592057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.592083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.385 [2024-12-16 01:41:18.592110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.385 [2024-12-16 01:41:18.592165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.385 [2024-12-16 01:41:18.592175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:112672 len:8 PRP1 0x0 PRP2 0x0 00:19:59.385 [2024-12-16 01:41:18.592187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592234] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:59.385 [2024-12-16 01:41:18.592287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.385 [2024-12-16 01:41:18.592307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.385 [2024-12-16 
01:41:18.592333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.385 [2024-12-16 01:41:18.592359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.385 [2024-12-16 01:41:18.592384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:18.592398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:59.385 [2024-12-16 01:41:18.592432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392480 (9): Bad file descriptor 00:19:59.385 [2024-12-16 01:41:18.596037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:59.385 [2024-12-16 01:41:18.620120] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:19:59.385 9533.40 IOPS, 37.24 MiB/s [2024-12-16T01:41:30.043Z] 9636.83 IOPS, 37.64 MiB/s [2024-12-16T01:41:30.043Z] 9708.14 IOPS, 37.92 MiB/s [2024-12-16T01:41:30.043Z] 9759.62 IOPS, 38.12 MiB/s [2024-12-16T01:41:30.043Z] 9813.00 IOPS, 38.33 MiB/s [2024-12-16T01:41:30.043Z] [2024-12-16 01:41:23.152863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.152937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.152981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.152996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153110] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.385 [2024-12-16 01:41:23.153246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.385 [2024-12-16 01:41:23.153260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94672 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.153471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:59.386 [2024-12-16 01:41:23.153819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:94144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.153985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.153998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154149] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.386 [2024-12-16 01:41:23.154273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154511] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.386 [2024-12-16 01:41:23.154648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.386 [2024-12-16 01:41:23.154687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.154982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.154997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:94336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.387 [2024-12-16 01:41:23.155222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:59.387 [2024-12-16 01:41:23.155294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.387 [2024-12-16 01:41:23.155857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.387 [2024-12-16 01:41:23.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.155885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.155899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.155914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.155927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.155942] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.155955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.155970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.155990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94936 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:59.388 [2024-12-16 01:41:23.156493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:59.388 [2024-12-16 01:41:23.156657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:94448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:94456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:94512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.156985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.156999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.157012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.157027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.157040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.157054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:59.388 [2024-12-16 01:41:23.157067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.157081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c3f60 is same with the state(6) to be set 00:19:59.388 [2024-12-16 01:41:23.157098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.388 [2024-12-16 01:41:23.157108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.388 [2024-12-16 01:41:23.157118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94544 len:8 PRP1 0x0 PRP2 0x0 00:19:59.388 [2024-12-16 01:41:23.157131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.388 [2024-12-16 01:41:23.157145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.388 [2024-12-16 01:41:23.157155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95000 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95008 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95016 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95024 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95032 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157533] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:59.389 [2024-12-16 01:41:23.157554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:59.389 [2024-12-16 01:41:23.157579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:19:59.389 [2024-12-16 01:41:23.157593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157642] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:59.389 [2024-12-16 01:41:23.157698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.389 [2024-12-16 01:41:23.157719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.389 [2024-12-16 01:41:23.157746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.389 [2024-12-16 01:41:23.157771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:59.389 [2024-12-16 01:41:23.157809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:59.389 [2024-12-16 01:41:23.157821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:59.389 [2024-12-16 01:41:23.157869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392480 (9): Bad file descriptor 00:19:59.389 [2024-12-16 01:41:23.161611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:59.389 [2024-12-16 01:41:23.188249] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:19:59.389 9802.80 IOPS, 38.29 MiB/s [2024-12-16T01:41:30.047Z] 9846.18 IOPS, 38.46 MiB/s [2024-12-16T01:41:30.047Z] 9879.00 IOPS, 38.59 MiB/s [2024-12-16T01:41:30.047Z] 9909.54 IOPS, 38.71 MiB/s [2024-12-16T01:41:30.047Z] 9936.86 IOPS, 38.82 MiB/s [2024-12-16T01:41:30.047Z] 9962.13 IOPS, 38.91 MiB/s 00:19:59.389 Latency(us) 00:19:59.389 [2024-12-16T01:41:30.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.389 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:59.389 Verification LBA range: start 0x0 length 0x4000 00:19:59.389 NVMe0n1 : 15.01 9962.96 38.92 220.48 0.00 12540.54 558.55 16324.42 00:19:59.389 [2024-12-16T01:41:30.047Z] =================================================================================================================== 00:19:59.389 [2024-12-16T01:41:30.047Z] Total : 9962.96 38.92 220.48 0.00 12540.54 558.55 16324.42 00:19:59.389 Received shutdown signal, test time was about 15.000000 seconds 00:19:59.389 00:19:59.389 Latency(us) 00:19:59.389 [2024-12-16T01:41:30.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.389 [2024-12-16T01:41:30.047Z] =================================================================================================================== 00:19:59.389 [2024-12-16T01:41:30.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.389 01:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:59.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
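The grep traced just above is the script checking its own captured bdevperf log for the controller resets recorded earlier (the failovers from 10.0.0.3:4421 to 4422 and from 4422 back to 4420 are both visible in the output above). A standalone sketch of that check, with the pattern, the expected count of 3 and the try.txt path all taken from this trace, would be roughly:

# Sketch of the check performed here: count successful controller resets in
# the captured bdevperf log and require exactly three of them (one per path
# switch). Pattern, count and log path are copied from the trace itself.
log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$log")
if (( count != 3 )); then
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi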
00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=92899 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 92899 /var/tmp/bdevperf.sock 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 92899 ']' 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:59.389 [2024-12-16 01:41:29.561451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:59.389 [2024-12-16 01:41:29.793580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:59.389 01:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:59.647 NVMe0n1 00:19:59.648 01:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:59.906 00:19:59.906 01:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:00.164 00:20:00.164 01:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:00.164 01:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:20:00.423 01:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:00.681 01:41:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:20:03.965 01:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:03.965 01:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:20:03.965 01:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=92969 00:20:03.965 01:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:03.965 01:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 92969 00:20:05.340 { 00:20:05.340 "results": [ 00:20:05.340 { 00:20:05.340 "job": "NVMe0n1", 00:20:05.340 "core_mask": "0x1", 00:20:05.340 "workload": "verify", 00:20:05.340 "status": "finished", 00:20:05.340 "verify_range": { 00:20:05.340 "start": 0, 00:20:05.340 "length": 16384 00:20:05.340 }, 00:20:05.340 "queue_depth": 128, 00:20:05.340 "io_size": 4096, 00:20:05.340 "runtime": 1.014361, 00:20:05.340 "iops": 7717.173668940348, 00:20:05.340 "mibps": 30.145209644298234, 00:20:05.340 "io_failed": 0, 00:20:05.340 "io_timeout": 0, 00:20:05.340 "avg_latency_us": 16519.068867933292, 00:20:05.340 "min_latency_us": 2100.130909090909, 00:20:05.340 "max_latency_us": 14656.232727272727 00:20:05.340 } 00:20:05.340 ], 00:20:05.340 "core_count": 1 00:20:05.340 } 00:20:05.340 01:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:05.340 [2024-12-16 01:41:29.064622] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
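The JSON block printed just above is the raw perform_tests result for the second bdevperf run; the readable latency table further down is derived from the same numbers. If that JSON were saved to a file (results.json is a hypothetical name here), the headline figures could be pulled out with jq, which the harness already uses elsewhere, assuming the field names shown in the block:

  # print per-job throughput and average latency from a saved perform_tests result
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json
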
00:20:05.340 [2024-12-16 01:41:29.064725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92899 ] 00:20:05.340 [2024-12-16 01:41:29.205255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.340 [2024-12-16 01:41:29.224334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.340 [2024-12-16 01:41:29.254277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:05.340 [2024-12-16 01:41:31.211080] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:20:05.340 [2024-12-16 01:41:31.211202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.340 [2024-12-16 01:41:31.211226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.340 [2024-12-16 01:41:31.211243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.341 [2024-12-16 01:41:31.211256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.341 [2024-12-16 01:41:31.211268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.341 [2024-12-16 01:41:31.211281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.341 [2024-12-16 01:41:31.211294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.341 [2024-12-16 01:41:31.211305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.341 [2024-12-16 01:41:31.211318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:20:05.341 [2024-12-16 01:41:31.211364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:20:05.341 [2024-12-16 01:41:31.211394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b480 (9): Bad file descriptor 00:20:05.341 [2024-12-16 01:41:31.221577] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:20:05.341 Running I/O for 1 seconds... 
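The dump above is bdevperf's own log of that second stage; the commands that drive it were traced a little earlier: the target gains listeners on 4421 and 4422, bdevperf attaches the same subsystem over all three ports in failover mode, the active path is torn down, and perform_tests runs the verify workload across the switch. A condensed sketch of that sequence with the same paths and addresses as the trace (the intermediate bdev_nvme_get_controllers checks are omitted):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  # give the initiator two alternate paths to nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

  # attach the controller in failover mode on all three ports; a single NVMe0n1 bdev results
  for port in 4420 4421 4422; do
      $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s "$port" \
          -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done

  # drop the active path, then let bdevperf run its verify workload across the failover
  $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  sleep 3
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
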
00:20:05.341 7700.00 IOPS, 30.08 MiB/s 00:20:05.341 Latency(us) 00:20:05.341 [2024-12-16T01:41:35.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.341 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:05.341 Verification LBA range: start 0x0 length 0x4000 00:20:05.341 NVMe0n1 : 1.01 7717.17 30.15 0.00 0.00 16519.07 2100.13 14656.23 00:20:05.341 [2024-12-16T01:41:35.999Z] =================================================================================================================== 00:20:05.341 [2024-12-16T01:41:35.999Z] Total : 7717.17 30.15 0.00 0.00 16519.07 2100.13 14656.23 00:20:05.341 01:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.341 01:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:20:05.341 01:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:05.908 01:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:20:05.908 01:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:05.908 01:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:06.166 01:41:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:20:09.494 01:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:09.494 01:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:20:09.494 01:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 92899 00:20:09.494 01:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92899 ']' 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92899 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92899 00:20:09.494 killing process with pid 92899 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92899' 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92899 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92899 00:20:09.494 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:20:09.753 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.012 rmmod nvme_tcp 00:20:10.012 rmmod nvme_fabrics 00:20:10.012 rmmod nvme_keyring 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 92657 ']' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 92657 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 92657 ']' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 92657 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92657 00:20:10.012 killing process with pid 92657 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92657' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 92657 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 92657 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:10.012 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:20:10.270 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:10.270 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.271 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:20:10.530 00:20:10.530 real 0m31.691s 00:20:10.530 user 2m2.022s 00:20:10.530 sys 0m5.371s 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:10.530 ************************************ 00:20:10.530 END TEST nvmf_failover 00:20:10.530 ************************************ 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.530 ************************************ 00:20:10.530 START TEST nvmf_host_discovery 00:20:10.530 ************************************ 00:20:10.530 01:41:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:10.530 * Looking for test storage... 
00:20:10.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:10.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.530 --rc genhtml_branch_coverage=1 00:20:10.530 --rc genhtml_function_coverage=1 00:20:10.530 --rc genhtml_legend=1 00:20:10.530 --rc geninfo_all_blocks=1 00:20:10.530 --rc geninfo_unexecuted_blocks=1 00:20:10.530 00:20:10.530 ' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:10.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.530 --rc genhtml_branch_coverage=1 00:20:10.530 --rc genhtml_function_coverage=1 00:20:10.530 --rc genhtml_legend=1 00:20:10.530 --rc geninfo_all_blocks=1 00:20:10.530 --rc geninfo_unexecuted_blocks=1 00:20:10.530 00:20:10.530 ' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:10.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.530 --rc genhtml_branch_coverage=1 00:20:10.530 --rc genhtml_function_coverage=1 00:20:10.530 --rc genhtml_legend=1 00:20:10.530 --rc geninfo_all_blocks=1 00:20:10.530 --rc geninfo_unexecuted_blocks=1 00:20:10.530 00:20:10.530 ' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:10.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.530 --rc genhtml_branch_coverage=1 00:20:10.530 --rc genhtml_function_coverage=1 00:20:10.530 --rc genhtml_legend=1 00:20:10.530 --rc geninfo_all_blocks=1 00:20:10.530 --rc geninfo_unexecuted_blocks=1 00:20:10.530 00:20:10.530 ' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:10.530 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:10.790 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:10.790 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:10.791 Cannot find device "nvmf_init_br" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:10.791 Cannot find device "nvmf_init_br2" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:10.791 Cannot find device "nvmf_tgt_br" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:10.791 Cannot find device "nvmf_tgt_br2" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:10.791 Cannot find device "nvmf_init_br" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:10.791 Cannot find device "nvmf_init_br2" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:10.791 Cannot find device "nvmf_tgt_br" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:10.791 Cannot find device "nvmf_tgt_br2" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:10.791 Cannot find device "nvmf_br" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:10.791 Cannot find device "nvmf_init_if" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:10.791 Cannot find device "nvmf_init_if2" 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:10.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:10.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:10.791 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:11.051 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:11.051 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:11.051 00:20:11.051 --- 10.0.0.3 ping statistics --- 00:20:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.051 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:11.051 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:11.051 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:11.051 00:20:11.051 --- 10.0.0.4 ping statistics --- 00:20:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.051 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:11.051 00:20:11.051 --- 10.0.0.1 ping statistics --- 00:20:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.051 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:11.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:11.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:20:11.051 00:20:11.051 --- 10.0.0.2 ping statistics --- 00:20:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.051 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:20:11.051 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=93295 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 93295 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 93295 ']' 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.052 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.052 [2024-12-16 01:41:41.647469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
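The ping checks above confirm the virtual topology that nvmf_veth_init just built: a network namespace (nvmf_tgt_ns_spdk) holding the target addresses 10.0.0.3/4, the initiator addresses 10.0.0.1/2 left in the root namespace, and a bridge (nvmf_br) joining the veth pairs. A condensed sketch of the core steps, showing one interface per side rather than the two of each the script actually creates:

  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: initiator-side end stays in the root namespace, target end moves into the netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # bridge the bridge-side ends together so the two sides can reach each other
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # sanity check: the root namespace can reach the target address
  ping -c 1 10.0.0.3
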
00:20:11.052 [2024-12-16 01:41:41.647564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.311 [2024-12-16 01:41:41.788841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.311 [2024-12-16 01:41:41.806760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.311 [2024-12-16 01:41:41.806819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.311 [2024-12-16 01:41:41.806829] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.311 [2024-12-16 01:41:41.806835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.311 [2024-12-16 01:41:41.806841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.311 [2024-12-16 01:41:41.807125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.311 [2024-12-16 01:41:41.835940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 [2024-12-16 01:41:41.932571] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 [2024-12-16 01:41:41.940685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.311 01:41:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 null0 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.311 null1 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.311 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.570 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.570 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=93319 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 93319 /tmp/host.sock 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 93319 ']' 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:11.571 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.571 01:41:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:11.571 [2024-12-16 01:41:42.032548] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
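At this point the target side of the discovery test is in place (TCP transport, discovery listener on port 8009, null0/null1 bdevs) and a second nvmf_tgt has been launched on /tmp/host.sock to act as the host; the trace just below has that host start discovery against 10.0.0.3:8009. A condensed sketch of those calls, assuming the target app answers on its default /var/tmp/spdk.sock (the rpc_cmd wrapper in the trace hides the socket argument):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: TCP transport plus a discovery listener on port 8009, and two null bdevs to publish later
  $rpc -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  $rpc -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
      -t tcp -a 10.0.0.3 -s 8009
  $rpc -s /var/tmp/spdk.sock bdev_null_create null0 1000 512
  $rpc -s /var/tmp/spdk.sock bdev_null_create null1 1000 512
  $rpc -s /var/tmp/spdk.sock bdev_wait_for_examine

  # host side: have the second instance follow the discovery service and attach what it reports
  $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test
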
00:20:11.571 [2024-12-16 01:41:42.032646] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93319 ] 00:20:11.571 [2024-12-16 01:41:42.185170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.571 [2024-12-16 01:41:42.209702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.829 [2024-12-16 01:41:42.243819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:11.829 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@91 -- # get_subsystem_names 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.088 [2024-12-16 01:41:42.632833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.088 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:20:12.088 01:41:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.089 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:12.347 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:20:12.348 01:41:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:20:12.914 [2024-12-16 01:41:43.315529] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:12.914 [2024-12-16 01:41:43.315583] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:12.914 
[2024-12-16 01:41:43.315602] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:12.915 [2024-12-16 01:41:43.321569] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:12.915 [2024-12-16 01:41:43.375902] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:12.915 [2024-12-16 01:41:43.376814] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1006f00:1 started. 00:20:12.915 [2024-12-16 01:41:43.378462] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:12.915 [2024-12-16 01:41:43.378485] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:12.915 [2024-12-16 01:41:43.384075] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1006f00 was disconnected and freed. delete nvme_qpair. 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:13.482 01:41:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:20:13.482 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.483 [2024-12-16 01:41:44.107572] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1007280:1 started. 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:13.483 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:13.483 [2024-12-16 01:41:44.114584] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1007280 was disconnected and freed. delete nvme_qpair. 
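The xtrace above is produced by small helpers in host/discovery.sh together with the generic waitforcondition poller from common/autotest_common.sh. A minimal bash sketch reconstructed from the traced commands (function names, RPC calls, and the /tmp/host.sock socket match the trace; the exact bodies, and the timeout handling in particular, are assumptions):

    # Reconstructed from the xtrace above; the real definitions live in host/discovery.sh
    # and common/autotest_common.sh and may differ in detail.
    get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }
    get_notification_count() {
        # Count only events newer than the last seen notification id, then advance the id,
        # so repeated checks see just the notifications generated since the previous call.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1   # timeout path assumed; the trace above only shows the success path
    }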
00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.742 [2024-12-16 01:41:44.218008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:13.742 [2024-12-16 01:41:44.218650] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:13.742 [2024-12-16 01:41:44.218675] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:13.742 [2024-12-16 01:41:44.224650] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.742 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.743 [2024-12-16 01:41:44.289029] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:20:13.743 [2024-12-16 01:41:44.289094] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:13.743 [2024-12-16 01:41:44.289105] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:13.743 [2024-12-16 01:41:44.289110] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.743 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.003 [2024-12-16 01:41:44.451269] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:14.003 [2024-12-16 01:41:44.451313] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:14.003 [2024-12-16 01:41:44.456077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.003 [2024-12-16 01:41:44.456140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.003 [2024-12-16 01:41:44.456167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.003 [2024-12-16 01:41:44.456176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.003 [2024-12-16 01:41:44.456184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.003 [2024-12-16 01:41:44.456193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.003 [2024-12-16 01:41:44.456202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:14.003 [2024-12-16 01:41:44.456210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:14.003 [2024-12-16 01:41:44.456219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfd7300 is same with the state(6) to be set 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:14.003 [2024-12-16 01:41:44.457292] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:20:14.003 [2024-12-16 01:41:44.457323] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:14.003 [2024-12-16 01:41:44.457391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd7300 (9): Bad file descriptor 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.003 01:41:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.003 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.262 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:14.263 01:41:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.263 01:41:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.639 [2024-12-16 01:41:45.876259] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:15.639 [2024-12-16 01:41:45.876285] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:15.639 [2024-12-16 01:41:45.876318] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:15.639 [2024-12-16 01:41:45.882294] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:20:15.639 [2024-12-16 01:41:45.940617] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:20:15.639 [2024-12-16 01:41:45.941267] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xfebcd0:1 started. 00:20:15.639 [2024-12-16 01:41:45.943139] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:15.639 [2024-12-16 01:41:45.943194] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:20:15.639 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.639 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:15.639 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:15.639 [2024-12-16 01:41:45.945110] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xfebcd0 was disconnected and freed. delete nvme_qpair. 
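The @143 check running here re-issues bdev_nvme_start_discovery under the name nvme, which is already in use by the discovery restarted at @141; the harness NOT helper inverts the exit status, so the step passes only if the RPC fails with the JSON-RPC error -17 ("File exists") shown in the request/response dump below. A hedged bash sketch of the same expected-failure pattern (socket path and arguments copied from the trace; the explicit if-check is an illustration, not the harness code):

    # Starting discovery a second time with an in-use controller name must be rejected.
    if rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
           -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
           -q nqn.2021-12.io.spdk:test -w; then
        echo "duplicate bdev_nvme_start_discovery unexpectedly succeeded" >&2
        exit 1
    fi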
00:20:15.639 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 request: 00:20:15.640 { 00:20:15.640 "name": "nvme", 00:20:15.640 "trtype": "tcp", 00:20:15.640 "traddr": "10.0.0.3", 00:20:15.640 "adrfam": "ipv4", 00:20:15.640 "trsvcid": "8009", 00:20:15.640 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:15.640 "wait_for_attach": true, 00:20:15.640 "method": "bdev_nvme_start_discovery", 00:20:15.640 "req_id": 1 00:20:15.640 } 00:20:15.640 Got JSON-RPC error response 00:20:15.640 response: 00:20:15.640 { 00:20:15.640 "code": -17, 00:20:15.640 "message": "File exists" 00:20:15.640 } 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:15.640 01:41:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 request: 00:20:15.640 { 00:20:15.640 "name": "nvme_second", 00:20:15.640 "trtype": "tcp", 00:20:15.640 "traddr": "10.0.0.3", 00:20:15.640 "adrfam": "ipv4", 00:20:15.640 "trsvcid": "8009", 00:20:15.640 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:15.640 "wait_for_attach": true, 00:20:15.640 "method": "bdev_nvme_start_discovery", 00:20:15.640 "req_id": 1 00:20:15.640 } 00:20:15.640 Got JSON-RPC error response 00:20:15.640 response: 00:20:15.640 { 00:20:15.640 "code": -17, 00:20:15.640 "message": "File exists" 00:20:15.640 } 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.640 01:41:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:16.575 [2024-12-16 01:41:47.207561] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:16.575 [2024-12-16 01:41:47.207598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0xfcf2b0 with addr=10.0.0.3, port=8010 00:20:16.575 [2024-12-16 01:41:47.207614] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:16.575 [2024-12-16 01:41:47.207622] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:16.575 [2024-12-16 01:41:47.207629] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:17.950 [2024-12-16 01:41:48.207519] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:17.950 [2024-12-16 01:41:48.207581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfcf2b0 with addr=10.0.0.3, port=8010 00:20:17.950 [2024-12-16 01:41:48.207596] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:17.950 [2024-12-16 01:41:48.207604] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:17.950 [2024-12-16 01:41:48.207611] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:20:18.886 [2024-12-16 01:41:49.207446] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:20:18.886 request: 00:20:18.886 { 00:20:18.886 "name": "nvme_second", 00:20:18.886 "trtype": "tcp", 00:20:18.886 "traddr": "10.0.0.3", 00:20:18.886 "adrfam": "ipv4", 00:20:18.886 "trsvcid": "8010", 00:20:18.886 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:18.886 "wait_for_attach": false, 00:20:18.886 "attach_timeout_ms": 3000, 00:20:18.886 "method": "bdev_nvme_start_discovery", 00:20:18.886 "req_id": 1 00:20:18.886 } 00:20:18.886 Got JSON-RPC error response 00:20:18.886 response: 00:20:18.886 { 00:20:18.886 "code": -110, 00:20:18.886 "message": "Connection timed out" 00:20:18.886 } 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:20:18.886 01:41:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 93319 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:18.886 rmmod nvme_tcp 00:20:18.886 rmmod nvme_fabrics 00:20:18.886 rmmod nvme_keyring 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 93295 ']' 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 93295 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 93295 ']' 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 93295 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93295 00:20:18.886 killing process with pid 93295 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.886 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93295' 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 93295 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 93295 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:18.887 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.145 01:41:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:20:19.145 00:20:19.145 real 0m8.788s 00:20:19.145 user 0m16.650s 00:20:19.145 sys 0m1.995s 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.145 ************************************ 00:20:19.145 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:19.145 END TEST nvmf_host_discovery 00:20:19.145 ************************************ 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.405 ************************************ 00:20:19.405 START TEST nvmf_host_multipath_status 00:20:19.405 ************************************ 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:20:19.405 * Looking for test 
storage... 00:20:19.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:20:19.405 01:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:20:19.405 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:19.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.406 --rc genhtml_branch_coverage=1 00:20:19.406 --rc genhtml_function_coverage=1 00:20:19.406 --rc genhtml_legend=1 00:20:19.406 --rc geninfo_all_blocks=1 00:20:19.406 --rc geninfo_unexecuted_blocks=1 00:20:19.406 00:20:19.406 ' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:19.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.406 --rc genhtml_branch_coverage=1 00:20:19.406 --rc genhtml_function_coverage=1 00:20:19.406 --rc genhtml_legend=1 00:20:19.406 --rc geninfo_all_blocks=1 00:20:19.406 --rc geninfo_unexecuted_blocks=1 00:20:19.406 00:20:19.406 ' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:19.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.406 --rc genhtml_branch_coverage=1 00:20:19.406 --rc genhtml_function_coverage=1 00:20:19.406 --rc genhtml_legend=1 00:20:19.406 --rc geninfo_all_blocks=1 00:20:19.406 --rc geninfo_unexecuted_blocks=1 00:20:19.406 00:20:19.406 ' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:19.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.406 --rc genhtml_branch_coverage=1 00:20:19.406 --rc genhtml_function_coverage=1 00:20:19.406 --rc genhtml_legend=1 00:20:19.406 --rc geninfo_all_blocks=1 00:20:19.406 --rc geninfo_unexecuted_blocks=1 00:20:19.406 00:20:19.406 ' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.406 01:41:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.406 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:19.406 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.407 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:19.666 Cannot find device "nvmf_init_br" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:19.666 Cannot find device "nvmf_init_br2" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:19.666 Cannot find device "nvmf_tgt_br" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.666 Cannot find device "nvmf_tgt_br2" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:19.666 Cannot find device "nvmf_init_br" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:19.666 Cannot find device "nvmf_init_br2" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:19.666 Cannot find device "nvmf_tgt_br" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:19.666 Cannot find device "nvmf_tgt_br2" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:19.666 Cannot find device "nvmf_br" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:20:19.666 Cannot find device "nvmf_init_if" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:19.666 Cannot find device "nvmf_init_if2" 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.666 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.666 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:19.925 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:19.925 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms 00:20:19.925 00:20:19.925 --- 10.0.0.3 ping statistics --- 00:20:19.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.925 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:19.925 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:19.925 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:20:19.925 00:20:19.925 --- 10.0.0.4 ping statistics --- 00:20:19.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.925 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:19.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:19.925 00:20:19.925 --- 10.0.0.1 ping statistics --- 00:20:19.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.925 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:19.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:19.925 00:20:19.925 --- 10.0.0.2 ping statistics --- 00:20:19.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.925 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=93807 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 93807 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 93807 ']' 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
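The nvmfappstart step recorded above reduces to launching the target inside the test namespace and waiting for its RPC socket to answer; a rough sketch of the equivalent commands follows (the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its literal code):

  # Launch the NVMe-oF target in the namespace on cores 0-1 (-m 0x3) with all trace groups enabled.
  sudo ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

  # Wait until the JSON-RPC socket /var/tmp/spdk.sock accepts requests before configuring the target.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done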
00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.925 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 [2024-12-16 01:41:50.505552] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:19.925 [2024-12-16 01:41:50.505797] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.184 [2024-12-16 01:41:50.654123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:20.184 [2024-12-16 01:41:50.679415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.184 [2024-12-16 01:41:50.679754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.184 [2024-12-16 01:41:50.679915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.184 [2024-12-16 01:41:50.679931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.184 [2024-12-16 01:41:50.679940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.184 [2024-12-16 01:41:50.680867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.184 [2024-12-16 01:41:50.680899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.184 [2024-12-16 01:41:50.718669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=93807 00:20:20.184 01:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:20.443 [2024-12-16 01:41:51.088033] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.702 01:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:20.960 Malloc0 00:20:20.960 01:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:21.219 01:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.478 01:41:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:21.478 [2024-12-16 01:41:52.104922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:21.478 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:20:21.737 [2024-12-16 01:41:52.333027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=93854 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 93854 /var/tmp/bdevperf.sock 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 93854 ']' 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
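Condensed, the target-side setup driven over /var/tmp/spdk.sock in the lines above amounts to the following (values match this run: a 64 MB, 512-byte-block malloc bdev exported through one subsystem with ANA reporting enabled and two TCP listeners, which the host will later treat as two paths):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421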
00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.737 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:21.995 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.995 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:20:21.995 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:22.254 01:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:22.821 Nvme0n1 00:20:22.821 01:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:23.080 Nvme0n1 00:20:23.080 01:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.080 01:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:20:24.982 01:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:20:24.982 01:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:25.240 01:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:25.499 01:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:20:26.435 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:20:26.435 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:26.435 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.435 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:26.693 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.693 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:26.693 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.693 01:41:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:26.952 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.952 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:26.952 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.952 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:27.210 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.210 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:27.210 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.210 01:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:27.469 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.469 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:27.469 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:27.469 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.729 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:27.729 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:27.729 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:27.729 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:28.005 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:28.005 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:20:28.005 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:28.278 01:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:28.537 01:41:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:29.473 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:29.473 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:29.473 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.473 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:29.731 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:29.731 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:29.731 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.731 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:29.990 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:29.990 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:29.990 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:29.990 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:30.250 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.250 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:30.250 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.250 01:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:30.817 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:31.076 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:31.076 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:31.076 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:31.335 01:42:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:31.594 01:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:32.531 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:32.531 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:32.531 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.531 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:32.791 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:32.791 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:32.791 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:32.791 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:33.050 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:33.050 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:33.050 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.050 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:33.618 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.618 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:33.618 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.618 01:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:33.618 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:33.618 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:33.618 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:33.618 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:34.186 01:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:34.445 01:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:34.703 01:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:35.639 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:35.639 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:35.898 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:35.898 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:36.157 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.157 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:36.157 01:42:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.157 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:36.416 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:36.416 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:36.416 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:36.416 01:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.674 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.674 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:36.674 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:36.674 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.932 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.932 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:36.933 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:36.933 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:36.933 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:36.933 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:36.933 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:36.933 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:37.500 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:37.500 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:37.500 01:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:37.500 01:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:37.758 01:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:38.695 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:38.695 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:38.695 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.695 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:38.953 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:38.953 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:38.953 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:38.953 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:39.212 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:39.212 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:39.212 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.212 01:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:39.470 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.470 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:39.470 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:39.470 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:39.729 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:39.729 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:39.729 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:39.729 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:20:39.988 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:39.988 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:39.988 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:39.988 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:40.555 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:40.555 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:40.556 01:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:40.556 01:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:40.814 01:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:41.755 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:41.755 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:41.755 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:41.755 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:42.014 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:42.014 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:42.014 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.014 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:42.273 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:42.273 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:42.273 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:42.273 01:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.531 01:42:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:42.531 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:42.531 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.532 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:42.793 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:42.793 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:42.793 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:42.793 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:43.057 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:43.057 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:43.057 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:43.057 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:43.315 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:43.315 01:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:43.574 01:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:43.574 01:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:43.833 01:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:44.092 01:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:45.028 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:45.028 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:45.028 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.028 01:42:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:45.286 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:45.286 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:45.286 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.286 01:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:45.545 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:45.545 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:45.545 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:45.545 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.112 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:46.372 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.372 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:46.372 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:46.372 01:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:46.631 01:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:46.631 01:42:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:46.631 01:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:46.890 01:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:47.149 01:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:48.085 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:48.085 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:48.085 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.085 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:48.344 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:48.344 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:48.344 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:48.344 01:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.603 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:48.603 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:48.603 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:48.603 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.170 01:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:49.429 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.429 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:49.429 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:49.429 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:49.688 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:49.688 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:49.688 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:49.947 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:50.205 01:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:51.206 01:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:51.206 01:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:51.206 01:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.206 01:42:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:51.464 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.464 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:51.464 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.464 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:51.723 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.723 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:51.723 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.723 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:51.982 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:51.982 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:51.982 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:51.982 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:52.241 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:52.241 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:52.241 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:52.241 01:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:52.498 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:52.498 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:52.498 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:52.498 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:52.757 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:52.757 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:52.757 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:53.324 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:53.324 01:42:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:54.700 01:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:54.700 01:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:54.700 01:42:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.700 01:42:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:54.700 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:54.700 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:54.700 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.700 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:54.957 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:54.957 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:54.957 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:54.957 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:55.216 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.216 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:55.216 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.216 01:42:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:55.475 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.475 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:55.475 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.475 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:55.733 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:55.733 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:55.733 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:55.733 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 93854 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 93854 ']' 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 93854 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93854 00:20:55.992 killing process with pid 93854 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93854' 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 93854 00:20:55.992 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 93854 00:20:55.992 { 00:20:55.992 "results": [ 00:20:55.992 { 00:20:55.992 "job": "Nvme0n1", 00:20:55.992 "core_mask": "0x4", 00:20:55.992 "workload": "verify", 00:20:55.992 "status": "terminated", 00:20:55.992 "verify_range": { 00:20:55.992 "start": 0, 00:20:55.992 "length": 16384 00:20:55.992 }, 00:20:55.992 "queue_depth": 128, 00:20:55.992 "io_size": 4096, 00:20:55.992 "runtime": 33.049501, 00:20:55.992 "iops": 9303.105665649839, 00:20:55.992 "mibps": 36.34025650644468, 00:20:55.992 "io_failed": 0, 00:20:55.992 "io_timeout": 0, 00:20:55.992 "avg_latency_us": 13727.451448035285, 00:20:55.992 "min_latency_us": 474.76363636363635, 00:20:55.992 "max_latency_us": 4057035.869090909 00:20:55.992 } 00:20:55.992 ], 00:20:55.992 "core_count": 1 00:20:55.992 } 00:20:56.260 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 93854 00:20:56.260 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:56.260 [2024-12-16 01:41:52.397866] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:56.260 [2024-12-16 01:41:52.397940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93854 ] 00:20:56.260 [2024-12-16 01:41:52.546982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.260 [2024-12-16 01:41:52.571006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.260 [2024-12-16 01:41:52.604100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.260 Running I/O for 90 seconds... 
00:20:56.260 7956.00 IOPS, 31.08 MiB/s [2024-12-16T01:42:26.918Z] 7882.50 IOPS, 30.79 MiB/s [2024-12-16T01:42:26.918Z] 7857.33 IOPS, 30.69 MiB/s [2024-12-16T01:42:26.918Z] 7813.25 IOPS, 30.52 MiB/s [2024-12-16T01:42:26.918Z] 7833.60 IOPS, 30.60 MiB/s [2024-12-16T01:42:26.918Z] 8272.67 IOPS, 32.32 MiB/s [2024-12-16T01:42:26.918Z] 8560.00 IOPS, 33.44 MiB/s [2024-12-16T01:42:26.918Z] 8760.88 IOPS, 34.22 MiB/s [2024-12-16T01:42:26.918Z] 8902.56 IOPS, 34.78 MiB/s [2024-12-16T01:42:26.918Z] 9013.10 IOPS, 35.21 MiB/s [2024-12-16T01:42:26.918Z] 9113.00 IOPS, 35.60 MiB/s [2024-12-16T01:42:26.918Z] 9214.92 IOPS, 36.00 MiB/s [2024-12-16T01:42:26.918Z] 9306.08 IOPS, 36.35 MiB/s [2024-12-16T01:42:26.918Z] 9377.93 IOPS, 36.63 MiB/s [2024-12-16T01:42:26.918Z] [2024-12-16 01:42:08.080388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:56.260 [2024-12-16 01:42:08.080767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.260 [2024-12-16 01:42:08.080782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
00:20:56.260-00:20:56.266 [2024-12-16 01:42:08] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repetitive per-command output elided — several hundred READ and WRITE commands on qid:1 (nsid:1, len:8, lba ~40664-41680; WRITEs as SGL DATA BLOCK OFFSET 0x0 len:0x1000, READs as SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) were each reported as completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd cycling 003f-007f and wrapping. The same pattern of notices continues below.
lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.266 [2024-12-16 01:42:08.091553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.266 [2024-12-16 01:42:08.091623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.091971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.091986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.266 [2024-12-16 01:42:08.092353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.266 [2024-12-16 01:42:08.092387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.266 [2024-12-16 01:42:08.092421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:20:56.266 [2024-12-16 01:42:08.092441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.266 [2024-12-16 01:42:08.092455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.266 [2024-12-16 01:42:08.092474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.092489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.104396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.104430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.104454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.104472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.104492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.104507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.104575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.104595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.104619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.104635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.104658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.104983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.105038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.105075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.105126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.105162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.105197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.105232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.105915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:56.267 [2024-12-16 01:42:08.105980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.105999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.106014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.106033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.106048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.106078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.106125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.106150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.106167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.106189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.106207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.106230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.106247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.108688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.108732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.108773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.267 [2024-12-16 01:42:08.108797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.108828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.108850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.108881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.108903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:56.267 [2024-12-16 01:42:08.108951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.267 [2024-12-16 01:42:08.108975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.109973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.109994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:20:56.268 [2024-12-16 01:42:08.110024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.110046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.110123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.110184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.268 [2024-12-16 01:42:08.110868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.110929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.110959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.110981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.111011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.111032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.111062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.111084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.111113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.111136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.111165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.268 [2024-12-16 01:42:08.111188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:56.268 [2024-12-16 01:42:08.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.111240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.111305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:56.269 [2024-12-16 01:42:08.111691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.111956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.111979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.269 [2024-12-16 01:42:08.112867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.112968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.112989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.113019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.113041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.113071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.269 [2024-12-16 01:42:08.113092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:56.269 [2024-12-16 01:42:08.113122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 
00:20:56.270 [2024-12-16 01:42:08.113338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.113749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.113801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.113853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.113908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.113946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.113968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114408] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.270 [2024-12-16 01:42:08.114646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.114709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.114762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.114814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.114873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.114945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.114975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:56.270 [2024-12-16 01:42:08.114996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.115026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.115048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.115078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.115100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.115130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.115151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.115181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.115203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.115233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.115255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:56.270 [2024-12-16 01:42:08.115285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.270 [2024-12-16 01:42:08.115306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.115797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.115819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.118157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.118242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.118297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 
dnr:0 00:20:56.271 [2024-12-16 01:42:08.118791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.118974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.118989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.271 [2024-12-16 01:42:08.119231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.119266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.271 [2024-12-16 01:42:08.119310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:56.271 [2024-12-16 01:42:08.119329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.119682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:56.272 [2024-12-16 01:42:08.119869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.119974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.119994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.272 [2024-12-16 01:42:08.120708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.272 [2024-12-16 01:42:08.120743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:56.272 [2024-12-16 01:42:08.120762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.120797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.120832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.120866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.120901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 
00:20:56.273 [2024-12-16 01:42:08.120935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.120970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.120985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.273 [2024-12-16 01:42:08.121616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.121968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.121988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.273 [2024-12-16 01:42:08.122011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.122032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:56.273 [2024-12-16 01:42:08.122047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:56.273 [2024-12-16 01:42:08.122067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:08.122082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:08.122147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:08.122182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:08.122218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.122965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.122981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:08.123326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:08.123353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:56.274 9077.00 IOPS, 35.46 MiB/s [2024-12-16T01:42:26.932Z] 8509.69 IOPS, 33.24 MiB/s [2024-12-16T01:42:26.932Z] 8009.12 IOPS, 31.29 MiB/s [2024-12-16T01:42:26.932Z] 7564.17 IOPS, 29.55 MiB/s [2024-12-16T01:42:26.932Z] 7425.11 IOPS, 29.00 MiB/s [2024-12-16T01:42:26.932Z] 7566.25 IOPS, 29.56 MiB/s [2024-12-16T01:42:26.932Z] 7730.33 IOPS, 30.20 MiB/s [2024-12-16T01:42:26.932Z] 8025.00 IOPS, 31.35 MiB/s [2024-12-16T01:42:26.932Z] 8260.74 IOPS, 32.27 MiB/s [2024-12-16T01:42:26.932Z] 8473.29 IOPS, 33.10 MiB/s [2024-12-16T01:42:26.932Z] 8559.64 IOPS, 33.44 MiB/s [2024-12-16T01:42:26.932Z] 8626.38 IOPS, 33.70 MiB/s [2024-12-16T01:42:26.932Z] 8688.81 IOPS, 33.94 MiB/s [2024-12-16T01:42:26.932Z] 8853.61 IOPS, 34.58 MiB/s [2024-12-16T01:42:26.932Z] 9019.21 IOPS, 35.23 MiB/s [2024-12-16T01:42:26.932Z] 9172.23 IOPS, 35.83 MiB/s [2024-12-16T01:42:26.932Z] [2024-12-16 01:42:23.908716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:23.908777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.908841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.908861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.908882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.908897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.908918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.908947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.908966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.908980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.908999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.909013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.909031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.909045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.909064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.909078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.909097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.274 [2024-12-16 01:42:23.909134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.909155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:23.909169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:56.274 [2024-12-16 01:42:23.909188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.274 [2024-12-16 01:42:23.909202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 
[2024-12-16 01:42:23.909235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7144 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:73 nsid:1 lba:7136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.909915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.909975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.909991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.910047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.910082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.910159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.910193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.910227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.910262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.910296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.910316] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.910331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.911880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.911909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.911934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.911949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.911969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.911983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.912003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.912029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.912050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.912064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.912083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.275 [2024-12-16 01:42:23.912097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.912116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.275 [2024-12-16 01:42:23.912130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:56.275 [2024-12-16 01:42:23.912149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.276 [2024-12-16 01:42:23.912162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:56.276 [2024-12-16 01:42:23.912182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:56.276 [2024-12-16 01:42:23.912196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:20:56.276 [2024-12-16 01:42:23.912216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.276 [2024-12-16 01:42:23.912230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:56.276 [2024-12-16 01:42:23.912248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.276 [2024-12-16 01:42:23.912262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:56.276 [2024-12-16 01:42:23.912281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.276 [2024-12-16 01:42:23.912295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:56.276 [2024-12-16 01:42:23.912313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.276 [2024-12-16 01:42:23.912327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:56.276 [2024-12-16 01:42:23.912347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:56.276 [2024-12-16 01:42:23.912361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:56.276 9244.87 IOPS, 36.11 MiB/s [2024-12-16T01:42:26.934Z] 9278.22 IOPS, 36.24 MiB/s [2024-12-16T01:42:26.934Z] 9306.39 IOPS, 36.35 MiB/s [2024-12-16T01:42:26.934Z] Received shutdown signal, test time was about 33.050265 seconds 00:20:56.276 00:20:56.276 Latency(us) 00:20:56.276 [2024-12-16T01:42:26.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.276 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:56.276 Verification LBA range: start 0x0 length 0x4000 00:20:56.276 Nvme0n1 : 33.05 9303.11 36.34 0.00 0.00 13727.45 474.76 4057035.87 00:20:56.276 [2024-12-16T01:42:26.934Z] =================================================================================================================== 00:20:56.276 [2024-12-16T01:42:26.934Z] Total : 9303.11 36.34 0.00 0.00 13727.45 474.76 4057035.87 00:20:56.276 01:42:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:56.535 rmmod nvme_tcp 00:20:56.535 rmmod nvme_fabrics 00:20:56.535 rmmod nvme_keyring 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 93807 ']' 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 93807 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 93807 ']' 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 93807 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.535 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93807 00:20:56.794 killing process with pid 93807 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93807' 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 93807 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 93807 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:56.794 01:42:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:56.794 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:57.053 00:20:57.053 real 0m37.753s 00:20:57.053 user 2m2.237s 00:20:57.053 sys 0m10.911s 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:57.053 ************************************ 00:20:57.053 END TEST nvmf_host_multipath_status 00:20:57.053 ************************************ 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.053 ************************************ 00:20:57.053 START TEST nvmf_discovery_remove_ifc 00:20:57.053 ************************************ 00:20:57.053 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:57.313 * Looking for test storage... 
00:20:57.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:57.313 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.314 --rc genhtml_branch_coverage=1 00:20:57.314 --rc genhtml_function_coverage=1 00:20:57.314 --rc genhtml_legend=1 00:20:57.314 --rc geninfo_all_blocks=1 00:20:57.314 --rc geninfo_unexecuted_blocks=1 00:20:57.314 00:20:57.314 ' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.314 --rc genhtml_branch_coverage=1 00:20:57.314 --rc genhtml_function_coverage=1 00:20:57.314 --rc genhtml_legend=1 00:20:57.314 --rc geninfo_all_blocks=1 00:20:57.314 --rc geninfo_unexecuted_blocks=1 00:20:57.314 00:20:57.314 ' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.314 --rc genhtml_branch_coverage=1 00:20:57.314 --rc genhtml_function_coverage=1 00:20:57.314 --rc genhtml_legend=1 00:20:57.314 --rc geninfo_all_blocks=1 00:20:57.314 --rc geninfo_unexecuted_blocks=1 00:20:57.314 00:20:57.314 ' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:57.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:57.314 --rc genhtml_branch_coverage=1 00:20:57.314 --rc genhtml_function_coverage=1 00:20:57.314 --rc genhtml_legend=1 00:20:57.314 --rc geninfo_all_blocks=1 00:20:57.314 --rc geninfo_unexecuted_blocks=1 00:20:57.314 00:20:57.314 ' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:57.314 01:42:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:57.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:57.314 01:42:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:57.314 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:57.315 Cannot find device "nvmf_init_br" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:57.315 Cannot find device "nvmf_init_br2" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:57.315 Cannot find device "nvmf_tgt_br" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:57.315 Cannot find device "nvmf_tgt_br2" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:57.315 Cannot find device "nvmf_init_br" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:57.315 Cannot find device "nvmf_init_br2" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:57.315 Cannot find device "nvmf_tgt_br" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:57.315 Cannot find device "nvmf_tgt_br2" 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:57.315 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:57.574 Cannot find device "nvmf_br" 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:57.574 Cannot find device "nvmf_init_if" 00:20:57.574 01:42:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:57.574 Cannot find device "nvmf_init_if2" 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:57.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:57.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:57.574 01:42:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:57.574 01:42:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:57.574 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:57.574 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:57.574 00:20:57.574 --- 10.0.0.3 ping statistics --- 00:20:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.574 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:57.574 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:57.574 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:20:57.574 00:20:57.574 --- 10.0.0.4 ping statistics --- 00:20:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.574 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:57.574 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:57.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:57.574 00:20:57.574 --- 10.0.0.1 ping statistics --- 00:20:57.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.574 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:57.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:20:57.575 00:20:57.575 --- 10.0.0.2 ping statistics --- 00:20:57.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.575 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.575 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:57.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=94672 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 94672 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 94672 ']' 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.834 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:57.834 [2024-12-16 01:42:28.290066] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:57.834 [2024-12-16 01:42:28.290172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.834 [2024-12-16 01:42:28.428225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.834 [2024-12-16 01:42:28.447845] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.834 [2024-12-16 01:42:28.447913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.834 [2024-12-16 01:42:28.447924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.834 [2024-12-16 01:42:28.447946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.834 [2024-12-16 01:42:28.447952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.834 [2024-12-16 01:42:28.448253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.834 [2024-12-16 01:42:28.476661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.093 [2024-12-16 01:42:28.605497] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.093 [2024-12-16 01:42:28.613619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:58.093 null0 00:20:58.093 [2024-12-16 01:42:28.645521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=94697 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 94697 /tmp/host.sock 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 94697 ']' 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.093 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.093 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.093 [2024-12-16 01:42:28.731084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:20:58.093 [2024-12-16 01:42:28.731184] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94697 ] 00:20:58.352 [2024-12-16 01:42:28.884540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.352 [2024-12-16 01:42:28.909295] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.352 01:42:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:58.611 [2024-12-16 01:42:29.036278] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:58.611 01:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.611 01:42:29 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:58.611 01:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.611 01:42:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:59.548 [2024-12-16 01:42:30.078337] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:59.548 [2024-12-16 01:42:30.078373] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:59.548 [2024-12-16 01:42:30.078409] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:59.548 [2024-12-16 01:42:30.084371] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:59.548 [2024-12-16 01:42:30.138701] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:59.548 [2024-12-16 01:42:30.139589] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1c94aa0:1 started. 00:20:59.548 [2024-12-16 01:42:30.141133] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:59.548 [2024-12-16 01:42:30.141202] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:59.548 [2024-12-16 01:42:30.141227] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:59.548 [2024-12-16 01:42:30.141243] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:59.548 [2024-12-16 01:42:30.141265] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:59.548 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:59.549 [2024-12-16 01:42:30.147154] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1c94aa0 was disconnected and freed. delete nvme_qpair. 
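With the target listening, the host app at /tmp/host.sock is driven entirely over RPC: error injection is enabled with bdev_nvme_set_options -e 1, the framework is started, and discovery is pointed at 10.0.0.3:8009 with deliberately short reconnect and controller-loss timeouts so the interface removal below converges quickly. A hedged equivalent using SPDK's scripts/rpc.py client (parameters copied from the trace; the repo path is the one used elsewhere in this run):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
    # attach to the discovery service and block until the NVM subsystem's controller is created
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # the get_bdev_list helper used throughout the test: sorted bdev names on one line
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs    # expected: nvme0n1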
00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:59.549 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:59.807 01:42:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:00.744 01:42:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.681 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:01.940 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.940 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:01.940 01:42:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:02.877 01:42:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:03.814 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:03.814 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.814 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.814 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:03.814 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:03.815 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:03.815 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:04.074 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:21:04.074 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:04.074 01:42:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:05.008 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.009 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:05.009 01:42:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:05.009 [2024-12-16 01:42:35.569225] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:21:05.009 [2024-12-16 01:42:35.569311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.009 [2024-12-16 01:42:35.569325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.009 [2024-12-16 01:42:35.569336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.009 [2024-12-16 01:42:35.569344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.009 [2024-12-16 01:42:35.569353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.009 [2024-12-16 01:42:35.569361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.009 [2024-12-16 01:42:35.569369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.009 [2024-12-16 01:42:35.569377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.009 [2024-12-16 01:42:35.569386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:05.009 [2024-12-16 01:42:35.569394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:05.009 [2024-12-16 01:42:35.569403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c681c0 is same with the state(6) to be set 
00:21:05.009 [2024-12-16 01:42:35.579222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c681c0 (9): Bad file descriptor 00:21:05.009 [2024-12-16 01:42:35.589235] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:05.009 [2024-12-16 01:42:35.589272] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:05.009 [2024-12-16 01:42:35.589278] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:05.009 [2024-12-16 01:42:35.589299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:05.009 [2024-12-16 01:42:35.589349] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:05.944 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:06.203 [2024-12-16 01:42:36.649615] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:21:06.203 [2024-12-16 01:42:36.649704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c681c0 with addr=10.0.0.3, port=4420 00:21:06.203 [2024-12-16 01:42:36.649726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c681c0 is same with the state(6) to be set 00:21:06.203 [2024-12-16 01:42:36.649765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c681c0 (9): Bad file descriptor 00:21:06.203 [2024-12-16 01:42:36.650379] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:21:06.203 [2024-12-16 01:42:36.650457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:06.203 [2024-12-16 01:42:36.650477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:06.203 [2024-12-16 01:42:36.650500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:06.203 [2024-12-16 01:42:36.650517] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:06.203 [2024-12-16 01:42:36.650557] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:06.203 [2024-12-16 01:42:36.650568] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
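Each one-second iteration above and below is the test's wait_for_bdev loop: the host is polled for its bdev list until the expected value appears, first nvme0n1, then an empty list once the interface is gone, and finally nvme1n1 after recovery. A small sketch of that pattern, mirroring the bdev_get_bdevs | jq | sort | xargs pipeline from the trace (wrapper names as in the script, socket path as logged):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"
    get_bdev_list() {
        # all host-side bdev names as one sorted, space-separated line ("" when none exist)
        $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once per second until the bdev list equals the expected value
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }
    wait_for_bdev nvme0n1   # after discovery attach
    wait_for_bdev ''        # after the target interface is taken away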
00:21:06.203 [2024-12-16 01:42:36.650594] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:06.203 [2024-12-16 01:42:36.650606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:06.203 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.203 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:21:06.203 01:42:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:07.144 [2024-12-16 01:42:37.650649] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:07.144 [2024-12-16 01:42:37.650696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:07.144 [2024-12-16 01:42:37.650718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:07.144 [2024-12-16 01:42:37.650743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:07.144 [2024-12-16 01:42:37.650751] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:21:07.144 [2024-12-16 01:42:37.650760] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:07.144 [2024-12-16 01:42:37.650765] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:07.144 [2024-12-16 01:42:37.650770] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:07.144 [2024-12-16 01:42:37.650797] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:21:07.144 [2024-12-16 01:42:37.650830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.144 [2024-12-16 01:42:37.650844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.144 [2024-12-16 01:42:37.650856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.144 [2024-12-16 01:42:37.650863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.144 [2024-12-16 01:42:37.650872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.144 [2024-12-16 01:42:37.650879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.144 [2024-12-16 01:42:37.650887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.144 [2024-12-16 01:42:37.650895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.144 [2024-12-16 01:42:37.650903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:07.144 [2024-12-16 01:42:37.650910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:07.144 [2024-12-16 01:42:37.650918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:21:07.144 [2024-12-16 01:42:37.650983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c5e820 (9): Bad file descriptor 00:21:07.144 [2024-12-16 01:42:37.651945] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:21:07.144 [2024-12-16 01:42:37.651983] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:07.144 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.403 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:07.403 01:42:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:08.339 01:42:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:21:08.339 01:42:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:21:09.275 [2024-12-16 01:42:39.662173] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:09.275 [2024-12-16 01:42:39.662200] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:09.275 [2024-12-16 01:42:39.662217] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:09.275 [2024-12-16 01:42:39.668204] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:21:09.275 [2024-12-16 01:42:39.722539] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:21:09.275 [2024-12-16 01:42:39.723242] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1c658d0:1 started. 00:21:09.275 [2024-12-16 01:42:39.724348] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:21:09.275 [2024-12-16 01:42:39.724404] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:21:09.275 [2024-12-16 01:42:39.724425] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:21:09.275 [2024-12-16 01:42:39.724440] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:21:09.275 [2024-12-16 01:42:39.724448] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:09.275 [2024-12-16 01:42:39.730998] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1c658d0 was disconnected and freed. delete nvme_qpair. 
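The fault and recovery cycle the log has just walked through reduces to four ip commands against the target namespace: once the listener disappears, the host retries roughly once per --reconnect-delay-sec until the 2-second controller-loss timeout expires and nvme0n1 is deleted; when the address returns, discovery re-attaches and the namespace comes back as nvme1n1. Condensed from the trace, with interface and address names exactly as logged:

    ns="ip netns exec nvmf_tgt_ns_spdk"
    # drop the data-path address and down the link; the host's nvme0n1 disappears
    # after the 2 s controller-loss timeout
    $ns ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    $ns ip link set nvmf_tgt_if down
    # restore the interface; discovery on 10.0.0.3:8009 re-attaches and creates nvme1n1
    $ns ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    $ns ip link set nvmf_tgt_if up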
00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 94697 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 94697 ']' 00:21:09.275 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 94697 00:21:09.276 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:09.276 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.276 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94697 00:21:09.534 killing process with pid 94697 00:21:09.534 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.534 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.534 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94697' 00:21:09.534 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 94697 00:21:09.534 01:42:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 94697 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.534 rmmod nvme_tcp 00:21:09.534 rmmod nvme_fabrics 00:21:09.534 rmmod nvme_keyring 00:21:09.534 01:42:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 94672 ']' 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 94672 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 94672 ']' 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 94672 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.534 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94672 00:21:09.791 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.791 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94672' 00:21:09.792 killing process with pid 94672 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 94672 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 94672 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:09.792 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:21:10.050 00:21:10.050 real 0m12.938s 00:21:10.050 user 0m22.273s 00:21:10.050 sys 0m2.218s 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:21:10.050 ************************************ 00:21:10.050 END TEST nvmf_discovery_remove_ifc 00:21:10.050 ************************************ 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.050 ************************************ 00:21:10.050 START TEST nvmf_identify_kernel_target 00:21:10.050 ************************************ 00:21:10.050 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:21:10.310 * Looking for test storage... 
00:21:10.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.310 --rc genhtml_branch_coverage=1 00:21:10.310 --rc genhtml_function_coverage=1 00:21:10.310 --rc genhtml_legend=1 00:21:10.310 --rc geninfo_all_blocks=1 00:21:10.310 --rc geninfo_unexecuted_blocks=1 00:21:10.310 00:21:10.310 ' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.310 --rc genhtml_branch_coverage=1 00:21:10.310 --rc genhtml_function_coverage=1 00:21:10.310 --rc genhtml_legend=1 00:21:10.310 --rc geninfo_all_blocks=1 00:21:10.310 --rc geninfo_unexecuted_blocks=1 00:21:10.310 00:21:10.310 ' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.310 --rc genhtml_branch_coverage=1 00:21:10.310 --rc genhtml_function_coverage=1 00:21:10.310 --rc genhtml_legend=1 00:21:10.310 --rc geninfo_all_blocks=1 00:21:10.310 --rc geninfo_unexecuted_blocks=1 00:21:10.310 00:21:10.310 ' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:10.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:10.310 --rc genhtml_branch_coverage=1 00:21:10.310 --rc genhtml_function_coverage=1 00:21:10.310 --rc genhtml_legend=1 00:21:10.310 --rc geninfo_all_blocks=1 00:21:10.310 --rc geninfo_unexecuted_blocks=1 00:21:10.310 00:21:10.310 ' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
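The lt 1.15 2 trace above is scripts/common.sh checking the installed lcov version: both version strings are split on '.', '-' and ':' and compared field by numeric field until one side wins. A condensed sketch of that comparison logic (the helper name below is hypothetical):

    version_lt() {                      # returns 0 when $1 is strictly older than $2
        local IFS=.-: i
        local -a a=($1) b=($2)          # split each version into numeric fields
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                        # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"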
00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.310 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.311 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:21:10.311 01:42:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:10.311 01:42:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:10.311 Cannot find device "nvmf_init_br" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:10.311 Cannot find device "nvmf_init_br2" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:10.311 Cannot find device "nvmf_tgt_br" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.311 Cannot find device "nvmf_tgt_br2" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:10.311 Cannot find device "nvmf_init_br" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:10.311 Cannot find device "nvmf_init_br2" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:10.311 Cannot find device "nvmf_tgt_br" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:10.311 Cannot find device "nvmf_tgt_br2" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:10.311 Cannot find device "nvmf_br" 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:21:10.311 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:10.570 Cannot find device "nvmf_init_if" 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:10.570 Cannot find device "nvmf_init_if2" 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.570 01:42:40 
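The "Cannot find device ..." and "Cannot open network namespace ..." messages above are expected on a freshly provisioned host: nvmf_veth_init begins by tearing down whatever topology a previous run may have left behind, and the xtrace shows true executing on the same common.sh line as each failing ip command, which is consistent with a cmd || true guard so the best-effort cleanup cannot abort the test while errexit is in effect. A condensed sketch of that idiom, using the interface names from the log:

  # Best-effort removal of stale interfaces; "Cannot find device" is harmless here
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
      ip link set "$dev" down     || true
  done
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if        || true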
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.570 01:42:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.570 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:10.570 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.570 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.570 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.570 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:10.571 01:42:41 
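At this point the virtual test network exists but is not yet bridged together: the namespace nvmf_tgt_ns_spdk holds the target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), the initiator interfaces stay in the root namespace (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), and the veth peers (nvmf_init_br*, nvmf_tgt_br*) are enslaved to the bridge nvmf_br in the records that follow. A condensed sketch of the same construction, reduced to one initiator/target pair:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                       # bridging happens next in the log
  ip link set nvmf_tgt_br  master nvmf_br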
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:10.571 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:10.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:10.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:21:10.830 00:21:10.830 --- 10.0.0.3 ping statistics --- 00:21:10.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.830 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:10.830 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:10.830 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:21:10.830 00:21:10.830 --- 10.0.0.4 ping statistics --- 00:21:10.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.830 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:21:10.830 00:21:10.830 --- 10.0.0.1 ping statistics --- 00:21:10.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.830 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:10.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
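The ipts helper above is a thin wrapper around iptables that appends an identifying comment to every rule it installs, so the listener port 4420 on both initiator interfaces and the intra-bridge forwarding rule can be opened for the test and later removed in one pass without touching unrelated firewall rules; the pings then confirm that the target addresses are reachable from the root namespace and the initiator addresses from inside the namespace. A sketch of the tag-and-flush pattern, taken from the commands visible in this log:

  # Open the NVMe/TCP port, tagging the rule so teardown can identify it later
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # At teardown (nvmf_tcp_fini further down), every tagged rule is dropped at once
  iptables-save | grep -v SPDK_NVMF | iptables-restore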
00:21:10.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:21:10.830 00:21:10.830 --- 10.0.0.2 ping statistics --- 00:21:10.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.830 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:10.830 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:11.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.089 Waiting for block devices as requested 00:21:11.089 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.348 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:11.348 No valid GPT data, bailing 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:11.348 01:42:41 
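configure_kernel_target is now walking /sys/block/nvme*, skipping zoned devices and anything that already carries a partition table (the spdk-gpt.py and blkid probes that print "No valid GPT data, bailing" and make block_in_use return 1); the last free device found, /dev/nvme1n1 below, becomes the namespace backing the kernel NVMe-oF target, which is then assembled through the nvmet configfs tree with the mkdir/echo/ln -s calls in the following records. The xtrace does not print the redirection targets of those echo commands, so the attribute names in this sketch are the standard nvmet configfs attributes and are an assumption about what common.sh writes to:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # assumed target of the first echo
  echo 1            > "$subsys/attr_allow_any_host"               # assumed target
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
  echo tcp          > "$nvmet/ports/1/addr_trtype"
  echo 4420         > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4         > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                    # exposes the subsystem on the port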
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:11.348 01:42:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:11.607 No valid GPT data, bailing 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:11.607 No valid GPT data, bailing 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:11.607 No valid GPT data, bailing 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:11.607 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -a 10.0.0.1 -t tcp -s 4420 00:21:11.869 00:21:11.869 Discovery Log Number of Records 2, Generation counter 2 00:21:11.869 =====Discovery Log Entry 0====== 00:21:11.869 trtype: tcp 00:21:11.869 adrfam: ipv4 00:21:11.869 subtype: current discovery subsystem 00:21:11.869 treq: not specified, sq flow control disable supported 00:21:11.869 portid: 1 00:21:11.869 trsvcid: 4420 00:21:11.869 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:11.869 traddr: 10.0.0.1 00:21:11.869 eflags: none 00:21:11.869 sectype: none 00:21:11.869 =====Discovery Log Entry 1====== 00:21:11.869 trtype: tcp 00:21:11.869 adrfam: ipv4 00:21:11.869 subtype: nvme subsystem 00:21:11.869 treq: not 
specified, sq flow control disable supported 00:21:11.869 portid: 1 00:21:11.869 trsvcid: 4420 00:21:11.869 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:11.869 traddr: 10.0.0.1 00:21:11.869 eflags: none 00:21:11.869 sectype: none 00:21:11.869 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:21:11.869 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:21:11.869 ===================================================== 00:21:11.869 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:11.869 ===================================================== 00:21:11.869 Controller Capabilities/Features 00:21:11.869 ================================ 00:21:11.869 Vendor ID: 0000 00:21:11.869 Subsystem Vendor ID: 0000 00:21:11.869 Serial Number: 4a028e249da9cfed0a38 00:21:11.869 Model Number: Linux 00:21:11.869 Firmware Version: 6.8.9-20 00:21:11.869 Recommended Arb Burst: 0 00:21:11.869 IEEE OUI Identifier: 00 00 00 00:21:11.869 Multi-path I/O 00:21:11.869 May have multiple subsystem ports: No 00:21:11.869 May have multiple controllers: No 00:21:11.869 Associated with SR-IOV VF: No 00:21:11.869 Max Data Transfer Size: Unlimited 00:21:11.869 Max Number of Namespaces: 0 00:21:11.869 Max Number of I/O Queues: 1024 00:21:11.869 NVMe Specification Version (VS): 1.3 00:21:11.869 NVMe Specification Version (Identify): 1.3 00:21:11.869 Maximum Queue Entries: 1024 00:21:11.869 Contiguous Queues Required: No 00:21:11.869 Arbitration Mechanisms Supported 00:21:11.869 Weighted Round Robin: Not Supported 00:21:11.869 Vendor Specific: Not Supported 00:21:11.869 Reset Timeout: 7500 ms 00:21:11.869 Doorbell Stride: 4 bytes 00:21:11.869 NVM Subsystem Reset: Not Supported 00:21:11.869 Command Sets Supported 00:21:11.869 NVM Command Set: Supported 00:21:11.869 Boot Partition: Not Supported 00:21:11.869 Memory Page Size Minimum: 4096 bytes 00:21:11.869 Memory Page Size Maximum: 4096 bytes 00:21:11.869 Persistent Memory Region: Not Supported 00:21:11.869 Optional Asynchronous Events Supported 00:21:11.869 Namespace Attribute Notices: Not Supported 00:21:11.869 Firmware Activation Notices: Not Supported 00:21:11.869 ANA Change Notices: Not Supported 00:21:11.869 PLE Aggregate Log Change Notices: Not Supported 00:21:11.869 LBA Status Info Alert Notices: Not Supported 00:21:11.869 EGE Aggregate Log Change Notices: Not Supported 00:21:11.869 Normal NVM Subsystem Shutdown event: Not Supported 00:21:11.869 Zone Descriptor Change Notices: Not Supported 00:21:11.869 Discovery Log Change Notices: Supported 00:21:11.869 Controller Attributes 00:21:11.869 128-bit Host Identifier: Not Supported 00:21:11.869 Non-Operational Permissive Mode: Not Supported 00:21:11.869 NVM Sets: Not Supported 00:21:11.869 Read Recovery Levels: Not Supported 00:21:11.869 Endurance Groups: Not Supported 00:21:11.869 Predictable Latency Mode: Not Supported 00:21:11.869 Traffic Based Keep ALive: Not Supported 00:21:11.869 Namespace Granularity: Not Supported 00:21:11.869 SQ Associations: Not Supported 00:21:11.869 UUID List: Not Supported 00:21:11.869 Multi-Domain Subsystem: Not Supported 00:21:11.869 Fixed Capacity Management: Not Supported 00:21:11.869 Variable Capacity Management: Not Supported 00:21:11.869 Delete Endurance Group: Not Supported 00:21:11.869 Delete NVM Set: Not Supported 00:21:11.869 Extended LBA Formats Supported: Not Supported 00:21:11.869 Flexible Data 
Placement Supported: Not Supported 00:21:11.869 00:21:11.869 Controller Memory Buffer Support 00:21:11.869 ================================ 00:21:11.869 Supported: No 00:21:11.869 00:21:11.869 Persistent Memory Region Support 00:21:11.869 ================================ 00:21:11.869 Supported: No 00:21:11.869 00:21:11.869 Admin Command Set Attributes 00:21:11.869 ============================ 00:21:11.869 Security Send/Receive: Not Supported 00:21:11.869 Format NVM: Not Supported 00:21:11.869 Firmware Activate/Download: Not Supported 00:21:11.869 Namespace Management: Not Supported 00:21:11.869 Device Self-Test: Not Supported 00:21:11.869 Directives: Not Supported 00:21:11.869 NVMe-MI: Not Supported 00:21:11.869 Virtualization Management: Not Supported 00:21:11.869 Doorbell Buffer Config: Not Supported 00:21:11.869 Get LBA Status Capability: Not Supported 00:21:11.869 Command & Feature Lockdown Capability: Not Supported 00:21:11.869 Abort Command Limit: 1 00:21:11.869 Async Event Request Limit: 1 00:21:11.869 Number of Firmware Slots: N/A 00:21:11.869 Firmware Slot 1 Read-Only: N/A 00:21:11.869 Firmware Activation Without Reset: N/A 00:21:11.869 Multiple Update Detection Support: N/A 00:21:11.869 Firmware Update Granularity: No Information Provided 00:21:11.869 Per-Namespace SMART Log: No 00:21:11.869 Asymmetric Namespace Access Log Page: Not Supported 00:21:11.869 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:11.869 Command Effects Log Page: Not Supported 00:21:11.869 Get Log Page Extended Data: Supported 00:21:11.869 Telemetry Log Pages: Not Supported 00:21:11.869 Persistent Event Log Pages: Not Supported 00:21:11.869 Supported Log Pages Log Page: May Support 00:21:11.869 Commands Supported & Effects Log Page: Not Supported 00:21:11.869 Feature Identifiers & Effects Log Page:May Support 00:21:11.869 NVMe-MI Commands & Effects Log Page: May Support 00:21:11.869 Data Area 4 for Telemetry Log: Not Supported 00:21:11.869 Error Log Page Entries Supported: 1 00:21:11.869 Keep Alive: Not Supported 00:21:11.869 00:21:11.870 NVM Command Set Attributes 00:21:11.870 ========================== 00:21:11.870 Submission Queue Entry Size 00:21:11.870 Max: 1 00:21:11.870 Min: 1 00:21:11.870 Completion Queue Entry Size 00:21:11.870 Max: 1 00:21:11.870 Min: 1 00:21:11.870 Number of Namespaces: 0 00:21:11.870 Compare Command: Not Supported 00:21:11.870 Write Uncorrectable Command: Not Supported 00:21:11.870 Dataset Management Command: Not Supported 00:21:11.870 Write Zeroes Command: Not Supported 00:21:11.870 Set Features Save Field: Not Supported 00:21:11.870 Reservations: Not Supported 00:21:11.870 Timestamp: Not Supported 00:21:11.870 Copy: Not Supported 00:21:11.870 Volatile Write Cache: Not Present 00:21:11.870 Atomic Write Unit (Normal): 1 00:21:11.870 Atomic Write Unit (PFail): 1 00:21:11.870 Atomic Compare & Write Unit: 1 00:21:11.870 Fused Compare & Write: Not Supported 00:21:11.870 Scatter-Gather List 00:21:11.870 SGL Command Set: Supported 00:21:11.870 SGL Keyed: Not Supported 00:21:11.870 SGL Bit Bucket Descriptor: Not Supported 00:21:11.870 SGL Metadata Pointer: Not Supported 00:21:11.870 Oversized SGL: Not Supported 00:21:11.870 SGL Metadata Address: Not Supported 00:21:11.870 SGL Offset: Supported 00:21:11.870 Transport SGL Data Block: Not Supported 00:21:11.870 Replay Protected Memory Block: Not Supported 00:21:11.870 00:21:11.870 Firmware Slot Information 00:21:11.870 ========================= 00:21:11.870 Active slot: 0 00:21:11.870 00:21:11.870 00:21:11.870 Error Log 
00:21:11.870 ========= 00:21:11.870 00:21:11.870 Active Namespaces 00:21:11.870 ================= 00:21:11.870 Discovery Log Page 00:21:11.870 ================== 00:21:11.870 Generation Counter: 2 00:21:11.870 Number of Records: 2 00:21:11.870 Record Format: 0 00:21:11.870 00:21:11.870 Discovery Log Entry 0 00:21:11.870 ---------------------- 00:21:11.870 Transport Type: 3 (TCP) 00:21:11.870 Address Family: 1 (IPv4) 00:21:11.870 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:11.870 Entry Flags: 00:21:11.870 Duplicate Returned Information: 0 00:21:11.870 Explicit Persistent Connection Support for Discovery: 0 00:21:11.870 Transport Requirements: 00:21:11.870 Secure Channel: Not Specified 00:21:11.870 Port ID: 1 (0x0001) 00:21:11.870 Controller ID: 65535 (0xffff) 00:21:11.870 Admin Max SQ Size: 32 00:21:11.870 Transport Service Identifier: 4420 00:21:11.870 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:11.870 Transport Address: 10.0.0.1 00:21:11.870 Discovery Log Entry 1 00:21:11.870 ---------------------- 00:21:11.870 Transport Type: 3 (TCP) 00:21:11.870 Address Family: 1 (IPv4) 00:21:11.870 Subsystem Type: 2 (NVM Subsystem) 00:21:11.870 Entry Flags: 00:21:11.870 Duplicate Returned Information: 0 00:21:11.870 Explicit Persistent Connection Support for Discovery: 0 00:21:11.870 Transport Requirements: 00:21:11.870 Secure Channel: Not Specified 00:21:11.870 Port ID: 1 (0x0001) 00:21:11.870 Controller ID: 65535 (0xffff) 00:21:11.870 Admin Max SQ Size: 32 00:21:11.870 Transport Service Identifier: 4420 00:21:11.870 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:21:11.870 Transport Address: 10.0.0.1 00:21:11.870 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:12.154 get_feature(0x01) failed 00:21:12.154 get_feature(0x02) failed 00:21:12.154 get_feature(0x04) failed 00:21:12.154 ===================================================== 00:21:12.154 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:12.154 ===================================================== 00:21:12.154 Controller Capabilities/Features 00:21:12.154 ================================ 00:21:12.154 Vendor ID: 0000 00:21:12.154 Subsystem Vendor ID: 0000 00:21:12.154 Serial Number: 3227b72ed767bb473022 00:21:12.154 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:21:12.154 Firmware Version: 6.8.9-20 00:21:12.154 Recommended Arb Burst: 6 00:21:12.154 IEEE OUI Identifier: 00 00 00 00:21:12.154 Multi-path I/O 00:21:12.154 May have multiple subsystem ports: Yes 00:21:12.154 May have multiple controllers: Yes 00:21:12.154 Associated with SR-IOV VF: No 00:21:12.154 Max Data Transfer Size: Unlimited 00:21:12.154 Max Number of Namespaces: 1024 00:21:12.154 Max Number of I/O Queues: 128 00:21:12.154 NVMe Specification Version (VS): 1.3 00:21:12.154 NVMe Specification Version (Identify): 1.3 00:21:12.154 Maximum Queue Entries: 1024 00:21:12.154 Contiguous Queues Required: No 00:21:12.154 Arbitration Mechanisms Supported 00:21:12.154 Weighted Round Robin: Not Supported 00:21:12.154 Vendor Specific: Not Supported 00:21:12.154 Reset Timeout: 7500 ms 00:21:12.154 Doorbell Stride: 4 bytes 00:21:12.154 NVM Subsystem Reset: Not Supported 00:21:12.154 Command Sets Supported 00:21:12.154 NVM Command Set: Supported 00:21:12.154 Boot Partition: Not Supported 00:21:12.154 Memory 
Page Size Minimum: 4096 bytes 00:21:12.154 Memory Page Size Maximum: 4096 bytes 00:21:12.154 Persistent Memory Region: Not Supported 00:21:12.154 Optional Asynchronous Events Supported 00:21:12.154 Namespace Attribute Notices: Supported 00:21:12.154 Firmware Activation Notices: Not Supported 00:21:12.154 ANA Change Notices: Supported 00:21:12.154 PLE Aggregate Log Change Notices: Not Supported 00:21:12.154 LBA Status Info Alert Notices: Not Supported 00:21:12.154 EGE Aggregate Log Change Notices: Not Supported 00:21:12.154 Normal NVM Subsystem Shutdown event: Not Supported 00:21:12.154 Zone Descriptor Change Notices: Not Supported 00:21:12.154 Discovery Log Change Notices: Not Supported 00:21:12.154 Controller Attributes 00:21:12.154 128-bit Host Identifier: Supported 00:21:12.154 Non-Operational Permissive Mode: Not Supported 00:21:12.154 NVM Sets: Not Supported 00:21:12.154 Read Recovery Levels: Not Supported 00:21:12.154 Endurance Groups: Not Supported 00:21:12.154 Predictable Latency Mode: Not Supported 00:21:12.154 Traffic Based Keep ALive: Supported 00:21:12.154 Namespace Granularity: Not Supported 00:21:12.154 SQ Associations: Not Supported 00:21:12.154 UUID List: Not Supported 00:21:12.154 Multi-Domain Subsystem: Not Supported 00:21:12.154 Fixed Capacity Management: Not Supported 00:21:12.154 Variable Capacity Management: Not Supported 00:21:12.154 Delete Endurance Group: Not Supported 00:21:12.154 Delete NVM Set: Not Supported 00:21:12.154 Extended LBA Formats Supported: Not Supported 00:21:12.154 Flexible Data Placement Supported: Not Supported 00:21:12.154 00:21:12.154 Controller Memory Buffer Support 00:21:12.154 ================================ 00:21:12.154 Supported: No 00:21:12.154 00:21:12.154 Persistent Memory Region Support 00:21:12.154 ================================ 00:21:12.154 Supported: No 00:21:12.154 00:21:12.154 Admin Command Set Attributes 00:21:12.154 ============================ 00:21:12.154 Security Send/Receive: Not Supported 00:21:12.154 Format NVM: Not Supported 00:21:12.154 Firmware Activate/Download: Not Supported 00:21:12.154 Namespace Management: Not Supported 00:21:12.154 Device Self-Test: Not Supported 00:21:12.154 Directives: Not Supported 00:21:12.154 NVMe-MI: Not Supported 00:21:12.154 Virtualization Management: Not Supported 00:21:12.154 Doorbell Buffer Config: Not Supported 00:21:12.154 Get LBA Status Capability: Not Supported 00:21:12.154 Command & Feature Lockdown Capability: Not Supported 00:21:12.154 Abort Command Limit: 4 00:21:12.154 Async Event Request Limit: 4 00:21:12.154 Number of Firmware Slots: N/A 00:21:12.154 Firmware Slot 1 Read-Only: N/A 00:21:12.154 Firmware Activation Without Reset: N/A 00:21:12.154 Multiple Update Detection Support: N/A 00:21:12.154 Firmware Update Granularity: No Information Provided 00:21:12.154 Per-Namespace SMART Log: Yes 00:21:12.154 Asymmetric Namespace Access Log Page: Supported 00:21:12.154 ANA Transition Time : 10 sec 00:21:12.154 00:21:12.154 Asymmetric Namespace Access Capabilities 00:21:12.154 ANA Optimized State : Supported 00:21:12.154 ANA Non-Optimized State : Supported 00:21:12.154 ANA Inaccessible State : Supported 00:21:12.154 ANA Persistent Loss State : Supported 00:21:12.154 ANA Change State : Supported 00:21:12.154 ANAGRPID is not changed : No 00:21:12.154 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:21:12.154 00:21:12.154 ANA Group Identifier Maximum : 128 00:21:12.154 Number of ANA Group Identifiers : 128 00:21:12.154 Max Number of Allowed Namespaces : 1024 00:21:12.154 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:21:12.154 Command Effects Log Page: Supported 00:21:12.154 Get Log Page Extended Data: Supported 00:21:12.154 Telemetry Log Pages: Not Supported 00:21:12.154 Persistent Event Log Pages: Not Supported 00:21:12.154 Supported Log Pages Log Page: May Support 00:21:12.154 Commands Supported & Effects Log Page: Not Supported 00:21:12.154 Feature Identifiers & Effects Log Page:May Support 00:21:12.154 NVMe-MI Commands & Effects Log Page: May Support 00:21:12.154 Data Area 4 for Telemetry Log: Not Supported 00:21:12.154 Error Log Page Entries Supported: 128 00:21:12.154 Keep Alive: Supported 00:21:12.154 Keep Alive Granularity: 1000 ms 00:21:12.154 00:21:12.154 NVM Command Set Attributes 00:21:12.154 ========================== 00:21:12.154 Submission Queue Entry Size 00:21:12.154 Max: 64 00:21:12.154 Min: 64 00:21:12.154 Completion Queue Entry Size 00:21:12.155 Max: 16 00:21:12.155 Min: 16 00:21:12.155 Number of Namespaces: 1024 00:21:12.155 Compare Command: Not Supported 00:21:12.155 Write Uncorrectable Command: Not Supported 00:21:12.155 Dataset Management Command: Supported 00:21:12.155 Write Zeroes Command: Supported 00:21:12.155 Set Features Save Field: Not Supported 00:21:12.155 Reservations: Not Supported 00:21:12.155 Timestamp: Not Supported 00:21:12.155 Copy: Not Supported 00:21:12.155 Volatile Write Cache: Present 00:21:12.155 Atomic Write Unit (Normal): 1 00:21:12.155 Atomic Write Unit (PFail): 1 00:21:12.155 Atomic Compare & Write Unit: 1 00:21:12.155 Fused Compare & Write: Not Supported 00:21:12.155 Scatter-Gather List 00:21:12.155 SGL Command Set: Supported 00:21:12.155 SGL Keyed: Not Supported 00:21:12.155 SGL Bit Bucket Descriptor: Not Supported 00:21:12.155 SGL Metadata Pointer: Not Supported 00:21:12.155 Oversized SGL: Not Supported 00:21:12.155 SGL Metadata Address: Not Supported 00:21:12.155 SGL Offset: Supported 00:21:12.155 Transport SGL Data Block: Not Supported 00:21:12.155 Replay Protected Memory Block: Not Supported 00:21:12.155 00:21:12.155 Firmware Slot Information 00:21:12.155 ========================= 00:21:12.155 Active slot: 0 00:21:12.155 00:21:12.155 Asymmetric Namespace Access 00:21:12.155 =========================== 00:21:12.155 Change Count : 0 00:21:12.155 Number of ANA Group Descriptors : 1 00:21:12.155 ANA Group Descriptor : 0 00:21:12.155 ANA Group ID : 1 00:21:12.155 Number of NSID Values : 1 00:21:12.155 Change Count : 0 00:21:12.155 ANA State : 1 00:21:12.155 Namespace Identifier : 1 00:21:12.155 00:21:12.155 Commands Supported and Effects 00:21:12.155 ============================== 00:21:12.155 Admin Commands 00:21:12.155 -------------- 00:21:12.155 Get Log Page (02h): Supported 00:21:12.155 Identify (06h): Supported 00:21:12.155 Abort (08h): Supported 00:21:12.155 Set Features (09h): Supported 00:21:12.155 Get Features (0Ah): Supported 00:21:12.155 Asynchronous Event Request (0Ch): Supported 00:21:12.155 Keep Alive (18h): Supported 00:21:12.155 I/O Commands 00:21:12.155 ------------ 00:21:12.155 Flush (00h): Supported 00:21:12.155 Write (01h): Supported LBA-Change 00:21:12.155 Read (02h): Supported 00:21:12.155 Write Zeroes (08h): Supported LBA-Change 00:21:12.155 Dataset Management (09h): Supported 00:21:12.155 00:21:12.155 Error Log 00:21:12.155 ========= 00:21:12.155 Entry: 0 00:21:12.155 Error Count: 0x3 00:21:12.155 Submission Queue Id: 0x0 00:21:12.155 Command Id: 0x5 00:21:12.155 Phase Bit: 0 00:21:12.155 Status Code: 0x2 00:21:12.155 Status Code Type: 0x0 00:21:12.155 Do Not Retry: 1 00:21:12.155 Error 
Location: 0x28 00:21:12.155 LBA: 0x0 00:21:12.155 Namespace: 0x0 00:21:12.155 Vendor Log Page: 0x0 00:21:12.155 ----------- 00:21:12.155 Entry: 1 00:21:12.155 Error Count: 0x2 00:21:12.155 Submission Queue Id: 0x0 00:21:12.155 Command Id: 0x5 00:21:12.155 Phase Bit: 0 00:21:12.155 Status Code: 0x2 00:21:12.155 Status Code Type: 0x0 00:21:12.155 Do Not Retry: 1 00:21:12.155 Error Location: 0x28 00:21:12.155 LBA: 0x0 00:21:12.155 Namespace: 0x0 00:21:12.155 Vendor Log Page: 0x0 00:21:12.155 ----------- 00:21:12.155 Entry: 2 00:21:12.155 Error Count: 0x1 00:21:12.155 Submission Queue Id: 0x0 00:21:12.155 Command Id: 0x4 00:21:12.155 Phase Bit: 0 00:21:12.155 Status Code: 0x2 00:21:12.155 Status Code Type: 0x0 00:21:12.155 Do Not Retry: 1 00:21:12.155 Error Location: 0x28 00:21:12.155 LBA: 0x0 00:21:12.155 Namespace: 0x0 00:21:12.155 Vendor Log Page: 0x0 00:21:12.155 00:21:12.155 Number of Queues 00:21:12.155 ================ 00:21:12.155 Number of I/O Submission Queues: 128 00:21:12.155 Number of I/O Completion Queues: 128 00:21:12.155 00:21:12.155 ZNS Specific Controller Data 00:21:12.155 ============================ 00:21:12.155 Zone Append Size Limit: 0 00:21:12.155 00:21:12.155 00:21:12.155 Active Namespaces 00:21:12.155 ================= 00:21:12.155 get_feature(0x05) failed 00:21:12.155 Namespace ID:1 00:21:12.155 Command Set Identifier: NVM (00h) 00:21:12.155 Deallocate: Supported 00:21:12.155 Deallocated/Unwritten Error: Not Supported 00:21:12.155 Deallocated Read Value: Unknown 00:21:12.155 Deallocate in Write Zeroes: Not Supported 00:21:12.155 Deallocated Guard Field: 0xFFFF 00:21:12.155 Flush: Supported 00:21:12.155 Reservation: Not Supported 00:21:12.155 Namespace Sharing Capabilities: Multiple Controllers 00:21:12.155 Size (in LBAs): 1310720 (5GiB) 00:21:12.155 Capacity (in LBAs): 1310720 (5GiB) 00:21:12.155 Utilization (in LBAs): 1310720 (5GiB) 00:21:12.155 UUID: bae995c7-fc0f-4431-ad63-66e319c99f83 00:21:12.155 Thin Provisioning: Not Supported 00:21:12.155 Per-NS Atomic Units: Yes 00:21:12.155 Atomic Boundary Size (Normal): 0 00:21:12.155 Atomic Boundary Size (PFail): 0 00:21:12.155 Atomic Boundary Offset: 0 00:21:12.155 NGUID/EUI64 Never Reused: No 00:21:12.155 ANA group ID: 1 00:21:12.155 Namespace Write Protected: No 00:21:12.155 Number of LBA Formats: 1 00:21:12.155 Current LBA Format: LBA Format #00 00:21:12.155 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:21:12.155 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.155 rmmod nvme_tcp 00:21:12.155 rmmod nvme_fabrics 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:21:12.155 01:42:42 
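With the identify output captured, nvmftestfini starts dismantling the host side: the nvme-tcp and nvme-fabrics modules are unloaded inside a bounded retry loop (the set +e / for i in {1..20} / set -e sequence in the trace) because they can still be briefly referenced right after the controller disconnects, and the firewall rules and virtual topology are removed in the records that follow. A condensed sketch of that teardown ordering; the back-off between retries and the exact body of _remove_spdk_ns are not visible in the trace and are assumptions:

  # Unload host-side NVMe/TCP modules; retried because they may be transiently busy
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1                                   # assumed back-off between attempts
  done
  set -e
  # Drop the SPDK-tagged firewall rules, then delete the veth/bridge topology
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns delete nvmf_tgt_ns_spdk              # assumed effect of _remove_spdk_ns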
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:12.155 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:12.424 01:42:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:12.424 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:13.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:13.361 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:13.361 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:13.361 00:21:13.361 real 0m3.347s 00:21:13.361 user 0m1.164s 00:21:13.361 sys 0m1.557s 00:21:13.361 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.361 ************************************ 00:21:13.361 01:42:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.361 END TEST nvmf_identify_kernel_target 00:21:13.361 ************************************ 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.621 ************************************ 00:21:13.621 START TEST nvmf_auth_host 00:21:13.621 ************************************ 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:21:13.621 * Looking for test storage... 
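clean_kernel_target then removes the kernel NVMe-oF target in the reverse order of its creation, which matters for configfs: the port-to-subsystem symlink has to go before the namespace and port directories, and the subsystem directory last, after which the nvmet modules can be unloaded and setup.sh rebinds the NVMe controllers to uio_pci_generic. A sketch matching the rm/rmdir/modprobe calls in the trace; the target of the bare "echo 0" is not shown and is assumed to disable the namespace:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"        # assumed target of the "echo 0" in the log
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir  "$subsys/namespaces/1"
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  "$subsys"
  modprobe -r nvmet_tcp nvmet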
00:21:13.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:13.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.621 --rc genhtml_branch_coverage=1 00:21:13.621 --rc genhtml_function_coverage=1 00:21:13.621 --rc genhtml_legend=1 00:21:13.621 --rc geninfo_all_blocks=1 00:21:13.621 --rc geninfo_unexecuted_blocks=1 00:21:13.621 00:21:13.621 ' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:13.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.621 --rc genhtml_branch_coverage=1 00:21:13.621 --rc genhtml_function_coverage=1 00:21:13.621 --rc genhtml_legend=1 00:21:13.621 --rc geninfo_all_blocks=1 00:21:13.621 --rc geninfo_unexecuted_blocks=1 00:21:13.621 00:21:13.621 ' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:13.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.621 --rc genhtml_branch_coverage=1 00:21:13.621 --rc genhtml_function_coverage=1 00:21:13.621 --rc genhtml_legend=1 00:21:13.621 --rc geninfo_all_blocks=1 00:21:13.621 --rc geninfo_unexecuted_blocks=1 00:21:13.621 00:21:13.621 ' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:13.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.621 --rc genhtml_branch_coverage=1 00:21:13.621 --rc genhtml_function_coverage=1 00:21:13.621 --rc genhtml_legend=1 00:21:13.621 --rc geninfo_all_blocks=1 00:21:13.621 --rc geninfo_unexecuted_blocks=1 00:21:13.621 00:21:13.621 ' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:21:13.621 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:21:13.622 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:13.881 Cannot find device "nvmf_init_br" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:13.881 Cannot find device "nvmf_init_br2" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:13.881 Cannot find device "nvmf_tgt_br" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.881 Cannot find device "nvmf_tgt_br2" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:13.881 Cannot find device "nvmf_init_br" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:13.881 Cannot find device "nvmf_init_br2" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:13.881 Cannot find device "nvmf_tgt_br" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:13.881 Cannot find device "nvmf_tgt_br2" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:13.881 Cannot find device "nvmf_br" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:13.881 Cannot find device "nvmf_init_if" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:13.881 Cannot find device "nvmf_init_if2" 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:13.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.881 01:42:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:13.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:13.881 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:13.882 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:13.882 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:13.882 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:13.882 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:13.882 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:13.882 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:14.141 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:14.141 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:14.141 00:21:14.141 --- 10.0.0.3 ping statistics --- 00:21:14.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.141 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:14.141 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:14.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:21:14.141 00:21:14.141 --- 10.0.0.4 ping statistics --- 00:21:14.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.141 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:14.141 00:21:14.141 --- 10.0.0.1 ping statistics --- 00:21:14.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.141 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:14.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:14.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:21:14.141 00:21:14.141 --- 10.0.0.2 ping statistics --- 00:21:14.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.141 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=95681 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 95681 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 95681 ']' 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
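At this point nvmf_veth_init has built the test network: a target namespace nvmf_tgt_ns_spdk, veth pairs whose *_if ends carry the initiator addresses 10.0.0.1/10.0.0.2 and the target addresses 10.0.0.3/10.0.0.4, a bridge nvmf_br joining the *_br peer ends, and iptables ACCEPT rules for TCP port 4420; the pings above confirm reachability in both directions before nvmf_tgt is started inside the namespace. A reduced sketch of the same topology with a single initiator/target pair (the test creates two of each; names and addresses are taken from the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target pair: the *_if end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                       # bridge the two *_br peers together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                            # root namespace -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target namespace -> root namespace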
00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.141 01:42:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.401 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.401 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:14.401 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.401 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.401 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.666 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.666 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:21:14.666 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:21:14.666 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.666 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.666 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=69b06ea6ecb3500c8731451e52976b5a 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4Ww 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 69b06ea6ecb3500c8731451e52976b5a 0 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 69b06ea6ecb3500c8731451e52976b5a 0 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=69b06ea6ecb3500c8731451e52976b5a 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4Ww 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4Ww 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.4Ww 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.667 01:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89eafaf96fdf0369269a9b95b0af61e968c6017c0809b594254fb4cfce36cc56 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dsk 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89eafaf96fdf0369269a9b95b0af61e968c6017c0809b594254fb4cfce36cc56 3 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89eafaf96fdf0369269a9b95b0af61e968c6017c0809b594254fb4cfce36cc56 3 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89eafaf96fdf0369269a9b95b0af61e968c6017c0809b594254fb4cfce36cc56 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dsk 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dsk 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dsk 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4f0f6480b3e6f5a12912f883662b8a76235bdd0c2b808cc8 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.oqW 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4f0f6480b3e6f5a12912f883662b8a76235bdd0c2b808cc8 0 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4f0f6480b3e6f5a12912f883662b8a76235bdd0c2b808cc8 0 
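Each keys[i]/ckeys[i] entry above is produced by gen_dhchap_key: a random hex secret of the requested length is read with xxd from /dev/urandom, written to a mktemp file named after the digest, wrapped into a DHHC-1 string, and restricted to mode 0600. The python one-liner that performs the wrapping is not visible in the trace; the sketch below assumes the payload is the base64 of the ASCII hex secret followed by its CRC-32 (little-endian), which is consistent with the DHHC-1:<digest>:<base64>: strings printed later in this log:

    digest=null; len=32                                # digest ids per the trace: null=0, sha256=1, sha384=2, sha512=3
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # 16 random bytes -> 32 hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")           # e.g. /tmp/spdk.key-null.4Ww
    # Assumed DHHC-1 wrapping: secret bytes are the ASCII hex string, trailed by its CRC-32.
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + struct.pack("<I", zlib.crc32(k))).decode()))' "$key" 0 > "$file"
    chmod 0600 "$file"
    echo "$file"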
00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4f0f6480b3e6f5a12912f883662b8a76235bdd0c2b808cc8 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.oqW 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.oqW 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.oqW 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a5d3a33afc9ed1ab99a5ee9412bbe2fa2f4bf368dedaf7f 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Z6R 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a5d3a33afc9ed1ab99a5ee9412bbe2fa2f4bf368dedaf7f 2 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a5d3a33afc9ed1ab99a5ee9412bbe2fa2f4bf368dedaf7f 2 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a5d3a33afc9ed1ab99a5ee9412bbe2fa2f4bf368dedaf7f 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:14.667 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Z6R 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Z6R 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Z6R 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.925 01:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0d15553e74d9cfe60de931f5da20bb63 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6x3 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0d15553e74d9cfe60de931f5da20bb63 1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0d15553e74d9cfe60de931f5da20bb63 1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0d15553e74d9cfe60de931f5da20bb63 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6x3 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6x3 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6x3 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b4ab922a91727eefb19d5788bb9503c 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Cl9 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b4ab922a91727eefb19d5788bb9503c 1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b4ab922a91727eefb19d5788bb9503c 1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8b4ab922a91727eefb19d5788bb9503c 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Cl9 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Cl9 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Cl9 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=38996b73373b79fbd0c90dfa098665be3498977cdb9ca1fe 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rZi 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 38996b73373b79fbd0c90dfa098665be3498977cdb9ca1fe 2 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 38996b73373b79fbd0c90dfa098665be3498977cdb9ca1fe 2 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=38996b73373b79fbd0c90dfa098665be3498977cdb9ca1fe 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rZi 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rZi 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.rZi 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:21:14.925 01:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=38049681e252cafb9f21f4a7f8680e35 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.sZU 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 38049681e252cafb9f21f4a7f8680e35 0 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 38049681e252cafb9f21f4a7f8680e35 0 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=38049681e252cafb9f21f4a7f8680e35 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:21:14.925 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.sZU 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.sZU 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.sZU 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ecbea96a05e813066faead747b63f000899a7de15275c8d72dcbbef91494b07c 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3Mf 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ecbea96a05e813066faead747b63f000899a7de15275c8d72dcbbef91494b07c 3 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ecbea96a05e813066faead747b63f000899a7de15275c8d72dcbbef91494b07c 3 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ecbea96a05e813066faead747b63f000899a7de15275c8d72dcbbef91494b07c 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3Mf 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3Mf 00:21:15.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3Mf 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 95681 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 95681 ']' 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.184 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.444 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:21:15.444 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:15.444 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4Ww 00:21:15.444 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dsk ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dsk 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.oqW 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Z6R ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Z6R 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6x3 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Cl9 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cl9 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.rZi 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.sZU ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.sZU 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3Mf 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.444 01:42:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:21:15.444 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:15.445 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:15.445 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:15.445 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:21:15.445 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:15.445 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:21:15.703 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:15.703 01:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:15.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:15.962 Waiting for block devices as requested 00:21:15.962 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:16.220 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:16.788 No valid GPT data, bailing 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:16.788 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:16.789 No valid GPT data, bailing 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:16.789 No valid GPT data, bailing 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:21:16.789 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:17.048 No valid GPT data, bailing 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 --hostid=febd874a-f7ac-4dde-b5e1-60c80814d053 -a 10.0.0.1 -t tcp -s 4420 00:21:17.048 00:21:17.048 Discovery Log Number of Records 2, Generation counter 2 00:21:17.048 =====Discovery Log Entry 0====== 00:21:17.048 trtype: tcp 00:21:17.048 adrfam: ipv4 00:21:17.048 subtype: current discovery subsystem 00:21:17.048 treq: not specified, sq flow control disable supported 00:21:17.048 portid: 1 00:21:17.048 trsvcid: 4420 00:21:17.048 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:17.048 traddr: 10.0.0.1 00:21:17.048 eflags: none 00:21:17.048 sectype: none 00:21:17.048 =====Discovery Log Entry 1====== 00:21:17.048 trtype: tcp 00:21:17.048 adrfam: ipv4 00:21:17.048 subtype: nvme subsystem 00:21:17.048 treq: not specified, sq flow control disable supported 00:21:17.048 portid: 1 00:21:17.048 trsvcid: 4420 00:21:17.048 subnqn: nqn.2024-02.io.spdk:cnode0 00:21:17.048 traddr: 10.0.0.1 00:21:17.048 eflags: none 00:21:17.048 sectype: none 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:17.048 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.049 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.308 nvme0n1 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.308 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 nvme0n1 00:21:17.568 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.568 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.568 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 01:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 
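The trace above first stands up a kernel NVMe-oF target over configfs before the DHCHAP iterations begin: it loads nvmet, scans /sys/block/nvme* for an unused, unpartitioned namespace (settling on /dev/nvme1n1), creates the subsystem, namespace and port directories, and exposes the subsystem on 10.0.0.1:4420 over TCP, which the subsequent nvme discover confirms with two discovery log entries. The xtrace output shows the echo commands but not their redirection targets, so the attribute paths in the sketch below are an assumption based on the standard nvmet configfs layout rather than something visible in the log; values are taken from the trace.

  # minimal sketch of the kernel-target setup implied by the trace (assumed configfs attribute names)
  modprobe nvmet
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  ns=$subsys/namespaces/1
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$subsys" "$ns" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"     # model string echoed in the trace
  echo 1 > "$subsys/attr_allow_any_host"                          # assumption for the first bare 'echo 1'
  echo /dev/nvme1n1 > "$ns/device_path"                           # backing block device selected above
  echo 1 > "$ns/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                             # publish the subsystem on the port
  nvme discover -t tcp -a 10.0.0.1 -s 4420                        # should list the discovery subsystem and cnode0

After that, host/auth.sh restricts access to a single host NQN (echo 0 to disable allow-any-host is the likely target of the 'echo 0' at auth.sh@37, then the allowed_hosts symlink) and begins cycling through digests, DH groups and key IDs.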
01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.568 01:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 nvme0n1 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.568 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:17.828 01:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.828 nvme0n1 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.828 01:42:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.828 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.088 nvme0n1 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:18.088 
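Each pass of the loop above pairs a kernel-side key update with an SPDK-initiator connect attempt: nvmet_auth_set_key echoes the digest ('hmac(sha256)'), the DH group (ffdhe2048) and the DHHC-1 key/controller-key for the current keyid into the allowed-host entry, then connect_authenticate configures the SPDK bdev_nvme layer with the same digest/dhgroup and attaches a controller using that key pair. The echo redirection targets are again not visible in the xtrace, so the dhchap_* attribute names below are an assumption based on the kernel's nvmet host configfs entries; rpc_cmd is the harness's wrapper around scripts/rpc.py, and key3/ckey3 are key names assumed to have been registered with SPDK's keyring earlier in the run. A minimal sketch of one iteration (keyid 3, sha256/ffdhe2048), with the full DHHC-1 values elided since they appear verbatim in the trace:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # kernel target side: set the expected hash, DH group and key pair for this host (assumed attribute names)
  echo 'hmac(sha256)'        > "$host/dhchap_hash"
  echo ffdhe2048             > "$host/dhchap_dhgroup"
  echo 'DHHC-1:02:<key3>'    > "$host/dhchap_key"        # host secret for keyid 3 (full value in the trace)
  echo 'DHHC-1:00:<ckey3>'   > "$host/dhchap_ctrl_key"   # controller secret for bidirectional auth

  # SPDK initiator side: restrict negotiation to the same digest/dhgroup, then connect with the key pair
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0 if authentication succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0               # tear down before the next digest/dhgroup/keyid

The 'nvme0n1' lines interleaved in the log are the kernel registering (and later removing) the authenticated namespace each time a connect succeeds, which is why the name check [[ nvme0 == nvme0 ]] and the detach follow every attach.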
01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:21:18.088 nvme0n1 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.088 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.347 01:42:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:18.606 01:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.606 nvme0n1 00:21:18.606 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.607 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.607 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.607 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.607 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.607 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.866 01:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.866 01:42:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.866 nvme0n1 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:18.866 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.867 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.126 nvme0n1 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.126 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.386 nvme0n1 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.386 01:42:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.645 nvme0n1 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:19.645 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.213 01:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.213 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.472 nvme0n1 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:20.472 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.473 01:42:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.473 01:42:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.732 nvme0n1 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.732 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.992 nvme0n1 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.992 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.252 nvme0n1 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:21.252 01:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.252 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.511 nvme0n1 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:21.511 01:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.889 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.148 nvme0n1 00:21:23.148 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.148 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.148 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.148 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.148 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.148 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.407 01:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.666 nvme0n1 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.666 01:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:23.666 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.667 01:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.667 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.926 nvme0n1 00:21:23.926 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.926 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:23.926 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.926 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.926 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:24.185 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.185 
01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 nvme0n1 00:21:24.445 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.445 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.445 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.445 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.445 01:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.445 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.703 nvme0n1 00:21:24.703 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.703 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:24.703 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:24.704 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.704 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.704 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:24.962 01:42:55 
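For keyid 4 the controller key is empty ("ckey=" above), so host/auth.sh@58 builds the optional --dhchap-ctrlr-key argument through a :+ parameter expansion and the attach call carries --dhchap-key key4 only. A small standalone illustration of that idiom (the array contents here are made up for the example):

  ckeys=("DHHC-1:03:example=:" "")                   # hypothetical; index 1 deliberately empty
  for keyid in "${!ckeys[@]}"; do
      # Expands to the option pair only when ckeys[keyid] is non-empty.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # keyid=0 extra args: --dhchap-ctrlr-key ckey0
  # keyid=1 extra args: <none>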
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:24.962 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.963 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.531 nvme0n1 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.531 01:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
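Every successful attach in this trace is verified and torn down the same way (host/auth.sh@64-65) before the next key is tried. Condensed, and assuming rpc_cmd behaves as shown in the surrounding lines:

  # The controller created by the authenticated attach must show up as nvme0.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]        # appears in the trace as [[ nvme0 == \n\v\m\e\0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The stray nvme0n1 lines interleaved with these checks appear to be the bdev name reported as each authenticated attach succeeds.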
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:25.531 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.532 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.532 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.099 nvme0n1 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.099 
01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.099 01:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.666 nvme0n1 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
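On the target side, each nvmet_auth_set_key call (host/auth.sh@42-51) only surfaces here as a few echo lines: the digest wrapped as 'hmac(...)', the DH group, and the DHHC-1 secret for the current keyid (plus the controller secret when one is defined). The destinations of those writes are not visible in this trace; the sketch below is an assumption about the usual kernel nvmet configfs layout, not something the log confirms:

  # Assumed target-side effect of nvmet_auth_set_key sha256 ffdhe8192 <keyid>
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0    # path assumed
  echo 'hmac(sha256)'       > "$host_dir/dhchap_hash"     # digest for this pass
  echo ffdhe8192            > "$host_dir/dhchap_dhgroup"  # DH group for this pass
  echo "DHHC-1:<secret>:"   > "$host_dir/dhchap_key"      # per-keyid host secret (placeholder)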
DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.667 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.235 nvme0n1 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.235 01:42:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.235 01:42:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.235 01:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.803 nvme0n1 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
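Here the trace finishes the sha256 pass and restarts with hmac(sha384) and ffdhe2048, which is the outer loop at host/auth.sh@100 advancing. The three nested loops implied by the @100/@101/@102 markers look like this (array contents beyond what this excerpt actually iterates over are not shown in the trace and are left out):

  for digest in "${digests[@]}"; do              # host/auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
          for keyid in "${!keys[@]}"; do         # host/auth.sh@102
              nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # @103: program the target
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # @104: attach, verify, detach
          done
      done
  done

Within this excerpt the loops cover sha256 with ffdhe6144/ffdhe8192 and sha384 with ffdhe2048/ffdhe3072, each dhgroup cycling through key IDs 0 through 4.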
ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.803 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:28.062 nvme0n1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.063 nvme0n1 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.063 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:21:28.323 
01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 nvme0n1 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.323 
01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.323 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.324 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.324 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:28.324 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.324 01:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.582 nvme0n1 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.582 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
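Each RPC in this trace is bracketed by autotest_common.sh@563 (xtrace_disable) and a [[ 0 == 0 ]] check at @591, i.e. tracing is muted while the RPC runs and its exit status is asserted afterwards. The real rpc_cmd in test/common/autotest_common.sh keeps a persistent RPC session and is more involved; the following is only a rough sketch of the observable pattern:

  rpc_cmd() {
      xtrace_disable                                 # @563: keep RPC plumbing out of the trace
      local status=0
      "$rootdir/scripts/rpc.py" "$@" || status=$?    # path via $rootdir assumed; forwards e.g. bdev_nvme_attach_controller
      xtrace_restore
      [[ $status == 0 ]]                             # shows up above as "[[ 0 == 0 ]]"
  }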
common/autotest_common.sh@10 -- # set +x 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.583 nvme0n1 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.583 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
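All the secrets cycled through above use the DHHC-1 key representation from the NVMe in-band authentication spec: a 'DHHC-1:' prefix, a two-digit transform identifier, and a base64 blob carrying the secret plus a CRC-32, terminated by ':'. The transform mapping (00 = cleartext secret, 01/02/03 = secret sized for SHA-256/384/512) is background from the spec, not something this log states. Pulling one of the keys from the trace apart:

  key='DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==:'
  IFS=: read -r fmt xform payload _ <<<"$key"
  echo "format=$fmt transform=$xform"        # DHHC-1, 02 (a SHA-384 sized secret)
  echo "base64(secret + crc32)=$payload"

The keys exercised by the loop span all four transform variants, which is visible in the second field (00 through 03) of the keys echoed throughout this section.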
host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.843 nvme0n1 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.843 
01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.843 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.103 01:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 nvme0n1 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:29.103 01:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.103 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.363 nvme0n1 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.363 01:42:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.363 01:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.363 nvme0n1 00:21:29.363 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.363 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.363 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.363 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.363 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:29.623 
01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
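[Note] The iterations traced above and below each exercise one NVMe-oF in-band DH-HMAC-CHAP authentication round: the target-side key is provisioned (nvmet_auth_set_key), the SPDK host is configured for the digest/dhgroup under test, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key, the attach is verified, and the controller is detached. The following is a minimal sketch of one such round, assembled only from the commands visible in this trace. Assumptions: rpc_cmd in the trace wraps SPDK's scripts/rpc.py; the nvmet configfs paths are the usual kernel layout but are not shown in the trace; key0/ckey0 are keyring entry names registered earlier by the test setup (not shown here); the 10.0.0.1:4420 listener and the NQNs are the values echoed in the trace.

  # --- target side: provision the host's DH-HMAC-CHAP secrets (what nvmet_auth_set_key does) ---
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0
  host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn          # assumed configfs path
  echo 'hmac(sha384)' > "$host_cfg/dhchap_hash"              # digest under test
  echo ffdhe3072       > "$host_cfg/dhchap_dhgroup"          # dhgroup under test
  echo 'DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe:' > "$host_cfg/dhchap_key"       # host key (key 0)
  echo 'DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=:' > "$host_cfg/dhchap_ctrl_key"  # controller key, if any

  # --- host side: configure SPDK and attach with authentication (what connect_authenticate does) ---
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0" when authentication succeeded
  scripts/rpc.py bdev_nvme_detach_controller nvme0              # tear down before the next digest/dhgroup/keyid

This is a sketch under the stated assumptions, not a verbatim excerpt of the test scripts; the trace itself remains the authoritative record of what was executed.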
00:21:29.623 nvme0n1 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:29.623 01:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.623 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.883 nvme0n1 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:29.883 01:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.883 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.143 01:43:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.143 nvme0n1 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.143 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.403 nvme0n1 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.403 01:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.403 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.663 nvme0n1 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.663 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.922 nvme0n1 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:30.922 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.923 01:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:30.923 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.181 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.181 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.181 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.441 nvme0n1 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.441 01:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.441 01:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.701 nvme0n1 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.701 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.269 nvme0n1 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:32.269 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.270 01:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.529 nvme0n1 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:32.529 01:43:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.529 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.788 nvme0n1 00:21:32.788 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.788 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:32.788 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.788 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:32.788 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:32.788 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:33.059 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.060 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 nvme0n1 00:21:33.640 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.640 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:33.640 01:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:33.640 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.641 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.209 nvme0n1 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.209 01:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.209 01:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.209 01:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.777 nvme0n1 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:34.777 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:34.778 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:34.778 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:34.778 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:34.778 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:34.778 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:34.778 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.778 
01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.346 nvme0n1 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.346 01:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.914 nvme0n1 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.914 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:35.915 01:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:35.915 01:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.915 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.174 nvme0n1 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:36.174 01:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:36.174 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.175 nvme0n1 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.175 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.434 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.434 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.435 nvme0n1 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.435 01:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.435 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.694 nvme0n1 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.694 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.695 nvme0n1 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.695 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.954 nvme0n1 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:36.954 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.955 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.214 nvme0n1 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:37.214 
01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:37.214 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.215 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:37.215 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:37.215 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:37.215 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.215 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.215 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 nvme0n1 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.474 01:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.474 
01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.474 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.733 nvme0n1 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.733 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.734 nvme0n1 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.734 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.993 nvme0n1 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:37.993 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.252 
01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.252 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.252 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:38.253 01:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.253 nvme0n1 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.253 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:38.512 01:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.512 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.513 01:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.513 nvme0n1 00:21:38.513 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.513 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.513 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.513 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.513 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.513 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.772 01:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:38.772 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.773 nvme0n1 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:38.773 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:39.032 
01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
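The trace above repeats one DH-HMAC-CHAP round per digest/DH-group/key-index combination: the target side is handed the host's secret (nvmet_auth_set_key echoes the digest, the DH group, the key and, when present, the controller key), the host side is restricted to the same digest and DH group via bdev_nvme_set_options, and the connection is then brought up and torn down again with bdev_nvme_attach_controller / bdev_nvme_detach_controller. A condensed sketch of a single round, using only the RPCs visible in this log (rpc_cmd is assumed here to be the test suite's wrapper around SPDK's JSON-RPC client; the 10.0.0.1:4420 listener and the nqn.2024-02.io.spdk:* names are the values echoed in the trace above):

  # Host side: allow only the digest/DH-group pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

  # Attach to the kernel nvmet target with key N (and controller key N when one is configured).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Verify the controller came up, then detach before the next digest/DH-group/key combination.
  [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the trace is the same sequence replayed for ffdhe6144 and ffdhe8192 with key indexes 0 through 4.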
00:21:39.032 nvme0n1 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.032 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:39.292 01:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.292 01:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.551 nvme0n1 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:39.551 01:43:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:39.551 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:39.552 01:43:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.552 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.811 nvme0n1 00:21:39.811 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.811 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:39.811 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:39.811 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.811 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:39.811 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.070 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.330 nvme0n1 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.330 01:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.589 nvme0n1 00:21:40.589 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.589 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:40.589 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:40.589 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.589 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.849 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.108 nvme0n1 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjliMDZlYTZlY2IzNTAwYzg3MzE0NTFlNTI5NzZiNWFKFPPe: 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODllYWZhZjk2ZmRmMDM2OTI2OWE5Yjk1YjBhZjYxZTk2OGM2MDE3YzA4MDliNTk0MjU0ZmI0Y2ZjZTM2Y2M1NrhrVTw=: 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.108 01:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.108 01:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.676 nvme0n1 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.676 01:43:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.676 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.244 nvme0n1 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.244 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.502 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.503 01:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.071 nvme0n1 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Mzg5OTZiNzMzNzNiNzlmYmQwYzkwZGZhMDk4NjY1YmUzNDk4OTc3Y2RiOWNhMWZlYgCZOA==: 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzgwNDk2ODFlMjUyY2FmYjlmMjFmNGE3Zjg2ODBlMzUohSNm: 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.071 01:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.639 nvme0n1 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNiZWE5NmEwNWU4MTMwNjZmYWVhZDc0N2I2M2YwMDA4OTlhN2RlMTUyNzVjOGQ3MmRjYmJlZjkxNDk0YjA3Y5dT7mE=: 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:43.639 01:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:43.639 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:43.640 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:43.640 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.640 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 nvme0n1 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 request: 00:21:44.209 { 00:21:44.209 "name": "nvme0", 00:21:44.209 "trtype": "tcp", 00:21:44.209 "traddr": "10.0.0.1", 00:21:44.209 "adrfam": "ipv4", 00:21:44.209 "trsvcid": "4420", 00:21:44.209 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:44.209 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:44.209 "prchk_reftag": false, 00:21:44.209 "prchk_guard": false, 00:21:44.209 "hdgst": false, 00:21:44.209 "ddgst": false, 00:21:44.209 "allow_unrecognized_csi": false, 00:21:44.209 "method": "bdev_nvme_attach_controller", 00:21:44.209 "req_id": 1 00:21:44.209 } 00:21:44.209 Got JSON-RPC error response 00:21:44.209 response: 00:21:44.209 { 00:21:44.209 "code": -5, 00:21:44.209 "message": "Input/output error" 00:21:44.209 } 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.209 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.209 request: 00:21:44.209 { 00:21:44.209 "name": "nvme0", 00:21:44.209 "trtype": "tcp", 00:21:44.209 "traddr": "10.0.0.1", 00:21:44.209 "adrfam": "ipv4", 00:21:44.209 "trsvcid": "4420", 00:21:44.209 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:44.209 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:44.209 "prchk_reftag": false, 00:21:44.209 "prchk_guard": false, 00:21:44.209 "hdgst": false, 00:21:44.209 "ddgst": false, 00:21:44.209 "dhchap_key": "key2", 00:21:44.209 "allow_unrecognized_csi": false, 00:21:44.210 "method": "bdev_nvme_attach_controller", 00:21:44.210 "req_id": 1 00:21:44.210 } 00:21:44.210 Got JSON-RPC error response 00:21:44.210 response: 00:21:44.210 { 00:21:44.210 "code": -5, 00:21:44.210 "message": "Input/output error" 00:21:44.210 } 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.210 01:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.210 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.469 request: 00:21:44.469 { 00:21:44.469 "name": "nvme0", 00:21:44.469 "trtype": "tcp", 00:21:44.469 "traddr": "10.0.0.1", 00:21:44.469 "adrfam": "ipv4", 00:21:44.469 "trsvcid": "4420", 
00:21:44.469 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:44.469 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:44.469 "prchk_reftag": false, 00:21:44.469 "prchk_guard": false, 00:21:44.469 "hdgst": false, 00:21:44.469 "ddgst": false, 00:21:44.469 "dhchap_key": "key1", 00:21:44.469 "dhchap_ctrlr_key": "ckey2", 00:21:44.469 "allow_unrecognized_csi": false, 00:21:44.469 "method": "bdev_nvme_attach_controller", 00:21:44.469 "req_id": 1 00:21:44.469 } 00:21:44.469 Got JSON-RPC error response 00:21:44.469 response: 00:21:44.469 { 00:21:44.469 "code": -5, 00:21:44.469 "message": "Input/output error" 00:21:44.469 } 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:44.469 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:44.470 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:44.470 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.470 01:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.470 nvme0n1 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.470 request: 00:21:44.470 { 00:21:44.470 "name": "nvme0", 00:21:44.470 "dhchap_key": "key1", 00:21:44.470 "dhchap_ctrlr_key": "ckey2", 00:21:44.470 "method": "bdev_nvme_set_keys", 00:21:44.470 "req_id": 1 00:21:44.470 } 00:21:44.470 Got JSON-RPC error response 00:21:44.470 response: 00:21:44.470 
{ 00:21:44.470 "code": -13, 00:21:44.470 "message": "Permission denied" 00:21:44.470 } 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:44.470 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.728 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.728 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:44.728 01:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGYwZjY0ODBiM2U2ZjVhMTI5MTJmODgzNjYyYjhhNzYyMzViZGQwYzJiODA4Y2M4ky7szA==: 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: ]] 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGE1ZDNhMzNhZmM5ZWQxYWI5OWE1ZWU5NDEyYmJlMmZhMmY0YmYzNjhkZWRhZjdm24nL9A==: 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.665 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.925 nvme0n1 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGQxNTU1M2U3NGQ5Y2ZlNjBkZTkzMWY1ZGEyMGJiNjNfTV9f: 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: ]] 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGI0YWI5MjJhOTE3MjdlZWZiMTlkNTc4OGJiOTUwM2MdA9qH: 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.925 request: 00:21:45.925 { 00:21:45.925 "name": "nvme0", 00:21:45.925 "dhchap_key": "key2", 00:21:45.925 "dhchap_ctrlr_key": "ckey1", 00:21:45.925 "method": "bdev_nvme_set_keys", 00:21:45.925 "req_id": 1 00:21:45.925 } 00:21:45.925 Got JSON-RPC error response 00:21:45.925 response: 00:21:45.925 { 00:21:45.925 "code": -13, 00:21:45.925 "message": "Permission denied" 00:21:45.925 } 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:45.925 01:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.861 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.861 rmmod nvme_tcp 00:21:47.121 rmmod nvme_fabrics 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 95681 ']' 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 95681 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 95681 ']' 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 95681 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95681 00:21:47.121 killing process with pid 95681 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95681' 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 95681 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 95681 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:47.121 01:43:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:47.121 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:47.380 01:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:48.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:48.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:48.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:48.317 01:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4Ww /tmp/spdk.key-null.oqW /tmp/spdk.key-sha256.6x3 /tmp/spdk.key-sha384.rZi /tmp/spdk.key-sha512.3Mf /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:48.317 01:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:48.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:48.576 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:48.576 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:48.835 00:21:48.835 real 0m35.208s 00:21:48.835 user 0m32.510s 00:21:48.835 sys 0m3.927s 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.835 ************************************ 00:21:48.835 END TEST nvmf_auth_host 00:21:48.835 ************************************ 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.835 ************************************ 00:21:48.835 START TEST nvmf_digest 00:21:48.835 ************************************ 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:48.835 * Looking for test storage... 
00:21:48.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:21:48.835 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:49.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.095 --rc genhtml_branch_coverage=1 00:21:49.095 --rc genhtml_function_coverage=1 00:21:49.095 --rc genhtml_legend=1 00:21:49.095 --rc geninfo_all_blocks=1 00:21:49.095 --rc geninfo_unexecuted_blocks=1 00:21:49.095 00:21:49.095 ' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:49.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.095 --rc genhtml_branch_coverage=1 00:21:49.095 --rc genhtml_function_coverage=1 00:21:49.095 --rc genhtml_legend=1 00:21:49.095 --rc geninfo_all_blocks=1 00:21:49.095 --rc geninfo_unexecuted_blocks=1 00:21:49.095 00:21:49.095 ' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:49.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.095 --rc genhtml_branch_coverage=1 00:21:49.095 --rc genhtml_function_coverage=1 00:21:49.095 --rc genhtml_legend=1 00:21:49.095 --rc geninfo_all_blocks=1 00:21:49.095 --rc geninfo_unexecuted_blocks=1 00:21:49.095 00:21:49.095 ' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:49.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.095 --rc genhtml_branch_coverage=1 00:21:49.095 --rc genhtml_function_coverage=1 00:21:49.095 --rc genhtml_legend=1 00:21:49.095 --rc geninfo_all_blocks=1 00:21:49.095 --rc geninfo_unexecuted_blocks=1 00:21:49.095 00:21:49.095 ' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.095 01:43:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.095 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.096 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:49.096 Cannot find device "nvmf_init_br" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:49.096 Cannot find device "nvmf_init_br2" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:49.096 Cannot find device "nvmf_tgt_br" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:49.096 Cannot find device "nvmf_tgt_br2" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:49.096 Cannot find device "nvmf_init_br" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:49.096 Cannot find device "nvmf_init_br2" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:49.096 Cannot find device "nvmf_tgt_br" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:49.096 Cannot find device "nvmf_tgt_br2" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:49.096 Cannot find device "nvmf_br" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:49.096 Cannot find device "nvmf_init_if" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:49.096 Cannot find device "nvmf_init_if2" 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.096 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:49.356 01:43:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:49.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:49.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:21:49.356 00:21:49.356 --- 10.0.0.3 ping statistics --- 00:21:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.356 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:49.356 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:49.356 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:21:49.356 00:21:49.356 --- 10.0.0.4 ping statistics --- 00:21:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.356 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:49.356 00:21:49.356 --- 10.0.0.1 ping statistics --- 00:21:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.356 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:49.356 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:49.356 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.356 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:21:49.356 00:21:49.356 --- 10.0.0.2 ping statistics --- 00:21:49.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.356 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:49.357 ************************************ 00:21:49.357 START TEST nvmf_digest_clean 00:21:49.357 ************************************ 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
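Note: the nvmf_veth_init trace above boils down to the following standalone commands; this is a simplified sketch using the interface names and 10.0.0.x addresses from the trace (the second initiator/target pair and the cleanup of stale devices are omitted):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3    # initiator to target namespace, as verified in the trace above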
00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=97311 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 97311 00:21:49.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97311 ']' 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.357 01:43:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.357 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.357 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:49.616 [2024-12-16 01:43:20.058849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:21:49.616 [2024-12-16 01:43:20.058949] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.616 [2024-12-16 01:43:20.206044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.616 [2024-12-16 01:43:20.224778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.616 [2024-12-16 01:43:20.224993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.616 [2024-12-16 01:43:20.225071] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.616 [2024-12-16 01:43:20.225145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.616 [2024-12-16 01:43:20.225203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:49.616 [2024-12-16 01:43:20.225504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:49.875 [2024-12-16 01:43:20.395610] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:49.875 null0 00:21:49.875 [2024-12-16 01:43:20.428082] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.875 [2024-12-16 01:43:20.452188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97330 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97330 /var/tmp/bperf.sock 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97330 ']' 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:49.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.875 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:49.875 [2024-12-16 01:43:20.521330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:21:49.875 [2024-12-16 01:43:20.521640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97330 ] 00:21:50.134 [2024-12-16 01:43:20.677054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.134 [2024-12-16 01:43:20.702150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.134 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.134 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:50.134 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:50.134 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:50.134 01:43:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:50.393 [2024-12-16 01:43:20.994607] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:50.393 01:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:50.393 01:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:50.961 nvme0n1 00:21:50.961 01:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:50.961 01:43:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:50.961 Running I/O for 2 seconds... 
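Note: the randread 4 KiB / qd 128 measurement above is driven entirely over the bperf UNIX socket; a minimal sketch of the same sequence, using the binary paths, target address, and NQN exactly as they appear in the trace, would be:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
(--ddgst enables the NVMe/TCP data digest on the initiator side; the "clean" variant of the test expects the resulting crc32c work to be executed by the software accel module, which is what the later accel_get_stats check asserts.)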
00:21:53.276 17653.00 IOPS, 68.96 MiB/s [2024-12-16T01:43:23.934Z] 17600.00 IOPS, 68.75 MiB/s 00:21:53.276 Latency(us) 00:21:53.276 [2024-12-16T01:43:23.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.276 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:53.276 nvme0n1 : 2.01 17638.51 68.90 0.00 0.00 7251.90 2517.18 17635.14 00:21:53.276 [2024-12-16T01:43:23.934Z] =================================================================================================================== 00:21:53.276 [2024-12-16T01:43:23.934Z] Total : 17638.51 68.90 0.00 0.00 7251.90 2517.18 17635.14 00:21:53.276 { 00:21:53.276 "results": [ 00:21:53.276 { 00:21:53.276 "job": "nvme0n1", 00:21:53.276 "core_mask": "0x2", 00:21:53.276 "workload": "randread", 00:21:53.276 "status": "finished", 00:21:53.276 "queue_depth": 128, 00:21:53.276 "io_size": 4096, 00:21:53.276 "runtime": 2.01009, 00:21:53.276 "iops": 17638.513698391613, 00:21:53.276 "mibps": 68.90044413434224, 00:21:53.276 "io_failed": 0, 00:21:53.276 "io_timeout": 0, 00:21:53.276 "avg_latency_us": 7251.901073268291, 00:21:53.276 "min_latency_us": 2517.1781818181817, 00:21:53.276 "max_latency_us": 17635.14181818182 00:21:53.276 } 00:21:53.276 ], 00:21:53.276 "core_count": 1 00:21:53.276 } 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:53.276 | select(.opcode=="crc32c") 00:21:53.276 | "\(.module_name) \(.executed)"' 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97330 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97330 ']' 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97330 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97330 00:21:53.276 killing process with pid 97330 00:21:53.276 Received shutdown signal, test time was about 2.000000 seconds 00:21:53.276 00:21:53.276 Latency(us) 00:21:53.276 [2024-12-16T01:43:23.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:53.276 [2024-12-16T01:43:23.934Z] =================================================================================================================== 00:21:53.276 [2024-12-16T01:43:23.934Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97330' 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97330 00:21:53.276 01:43:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97330 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97383 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97383 /var/tmp/bperf.sock 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97383 ']' 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.535 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:53.535 [2024-12-16 01:43:24.052831] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:21:53.535 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:53.535 Zero copy mechanism will not be used. 
00:21:53.535 [2024-12-16 01:43:24.053416] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97383 ] 00:21:53.535 [2024-12-16 01:43:24.191090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.794 [2024-12-16 01:43:24.210790] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.794 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.794 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:53.794 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:53.794 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:53.794 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:54.053 [2024-12-16 01:43:24.502099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:54.053 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:54.053 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:54.311 nvme0n1 00:21:54.311 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:54.311 01:43:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:54.311 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:54.311 Zero copy mechanism will not be used. 00:21:54.311 Running I/O for 2 seconds... 
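Note: after each run the harness verifies that crc32c digest work actually happened, via the accel_get_stats RPC and jq filter traced above. A standalone sketch of that check, with the socket path and expected module name taken from the trace:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected here: "software <nonzero count>", since DSA offload (dsa_initiator/dsa_target) is disabled in this job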
00:21:56.685 8416.00 IOPS, 1052.00 MiB/s [2024-12-16T01:43:27.343Z] 8512.00 IOPS, 1064.00 MiB/s 00:21:56.685 Latency(us) 00:21:56.685 [2024-12-16T01:43:27.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.685 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:56.685 nvme0n1 : 2.00 8509.79 1063.72 0.00 0.00 1877.40 1690.53 4021.53 00:21:56.685 [2024-12-16T01:43:27.343Z] =================================================================================================================== 00:21:56.685 [2024-12-16T01:43:27.343Z] Total : 8509.79 1063.72 0.00 0.00 1877.40 1690.53 4021.53 00:21:56.685 { 00:21:56.685 "results": [ 00:21:56.685 { 00:21:56.685 "job": "nvme0n1", 00:21:56.685 "core_mask": "0x2", 00:21:56.685 "workload": "randread", 00:21:56.685 "status": "finished", 00:21:56.685 "queue_depth": 16, 00:21:56.685 "io_size": 131072, 00:21:56.685 "runtime": 2.0024, 00:21:56.685 "iops": 8509.788254095085, 00:21:56.685 "mibps": 1063.7235317618856, 00:21:56.685 "io_failed": 0, 00:21:56.685 "io_timeout": 0, 00:21:56.685 "avg_latency_us": 1877.3973606487411, 00:21:56.685 "min_latency_us": 1690.530909090909, 00:21:56.685 "max_latency_us": 4021.5272727272727 00:21:56.685 } 00:21:56.685 ], 00:21:56.685 "core_count": 1 00:21:56.685 } 00:21:56.685 01:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:56.685 01:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:56.685 01:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:56.685 01:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:56.685 01:43:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:56.685 | select(.opcode=="crc32c") 00:21:56.685 | "\(.module_name) \(.executed)"' 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97383 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97383 ']' 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97383 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97383 00:21:56.685 killing process with pid 97383 00:21:56.685 Received shutdown signal, test time was about 2.000000 seconds 00:21:56.685 00:21:56.685 Latency(us) 00:21:56.685 [2024-12-16T01:43:27.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:56.685 [2024-12-16T01:43:27.343Z] =================================================================================================================== 00:21:56.685 [2024-12-16T01:43:27.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97383' 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97383 00:21:56.685 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97383 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97430 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97430 /var/tmp/bperf.sock 00:21:56.944 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97430 ']' 00:21:56.945 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:56.945 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.945 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:56.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:56.945 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.945 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:56.945 [2024-12-16 01:43:27.411970] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:21:56.945 [2024-12-16 01:43:27.412083] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97430 ] 00:21:56.945 [2024-12-16 01:43:27.557746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.945 [2024-12-16 01:43:27.576801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.204 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.204 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:57.204 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:57.204 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:57.204 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:57.462 [2024-12-16 01:43:27.935900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:57.462 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:57.462 01:43:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:57.721 nvme0n1 00:21:57.721 01:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:57.721 01:43:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:57.979 Running I/O for 2 seconds... 
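Note: the IOPS and MiB/s columns in these bdevperf tables are related by the configured I/O size; a quick spot-check against the runs already reported above (assuming bc is available on the box):
echo "scale=2; 17638.51 * 4096 / 1048576" | bc      # ~68.90 MiB/s for the 4 KiB randread run
echo "scale=2; 8509.79 * 131072 / 1048576" | bc     # ~1063.72 MiB/s for the 128 KiB randread run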
00:21:59.852 19178.00 IOPS, 74.91 MiB/s [2024-12-16T01:43:30.510Z] 19177.50 IOPS, 74.91 MiB/s 00:21:59.852 Latency(us) 00:21:59.852 [2024-12-16T01:43:30.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.852 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:59.852 nvme0n1 : 2.01 19198.61 74.99 0.00 0.00 6660.56 2129.92 15371.17 00:21:59.852 [2024-12-16T01:43:30.510Z] =================================================================================================================== 00:21:59.852 [2024-12-16T01:43:30.510Z] Total : 19198.61 74.99 0.00 0.00 6660.56 2129.92 15371.17 00:21:59.852 { 00:21:59.852 "results": [ 00:21:59.852 { 00:21:59.852 "job": "nvme0n1", 00:21:59.852 "core_mask": "0x2", 00:21:59.852 "workload": "randwrite", 00:21:59.852 "status": "finished", 00:21:59.852 "queue_depth": 128, 00:21:59.852 "io_size": 4096, 00:21:59.852 "runtime": 2.007072, 00:21:59.852 "iops": 19198.613701949904, 00:21:59.852 "mibps": 74.99458477324181, 00:21:59.852 "io_failed": 0, 00:21:59.852 "io_timeout": 0, 00:21:59.852 "avg_latency_us": 6660.562959635542, 00:21:59.852 "min_latency_us": 2129.92, 00:21:59.852 "max_latency_us": 15371.17090909091 00:21:59.852 } 00:21:59.852 ], 00:21:59.852 "core_count": 1 00:21:59.852 } 00:21:59.852 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:59.852 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:59.852 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:59.852 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:59.852 | select(.opcode=="crc32c") 00:21:59.852 | "\(.module_name) \(.executed)"' 00:21:59.852 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97430 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97430 ']' 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97430 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97430 00:22:00.112 killing process with pid 97430 00:22:00.112 Received shutdown signal, test time was about 2.000000 seconds 00:22:00.112 00:22:00.112 Latency(us) 00:22:00.112 [2024-12-16T01:43:30.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.112 
[2024-12-16T01:43:30.770Z] =================================================================================================================== 00:22:00.112 [2024-12-16T01:43:30.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97430' 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97430 00:22:00.112 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97430 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=97478 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 97478 /var/tmp/bperf.sock 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 97478 ']' 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:00.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:00.371 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.372 01:43:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:00.372 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:00.372 Zero copy mechanism will not be used. 00:22:00.372 [2024-12-16 01:43:30.924574] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:00.372 [2024-12-16 01:43:30.924673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97478 ] 00:22:00.631 [2024-12-16 01:43:31.071124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.631 [2024-12-16 01:43:31.091001] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.199 01:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.199 01:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:22:01.199 01:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:22:01.199 01:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:22:01.199 01:43:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:01.458 [2024-12-16 01:43:32.053796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:01.458 01:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:01.458 01:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:02.026 nvme0n1 00:22:02.026 01:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:22:02.026 01:43:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:02.026 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:02.026 Zero copy mechanism will not be used. 00:22:02.026 Running I/O for 2 seconds... 
00:22:03.899 7201.00 IOPS, 900.12 MiB/s [2024-12-16T01:43:34.557Z] 7200.00 IOPS, 900.00 MiB/s 00:22:03.899 Latency(us) 00:22:03.899 [2024-12-16T01:43:34.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.900 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:03.900 nvme0n1 : 2.00 7196.34 899.54 0.00 0.00 2218.34 1802.24 4855.62 00:22:03.900 [2024-12-16T01:43:34.558Z] =================================================================================================================== 00:22:03.900 [2024-12-16T01:43:34.558Z] Total : 7196.34 899.54 0.00 0.00 2218.34 1802.24 4855.62 00:22:03.900 { 00:22:03.900 "results": [ 00:22:03.900 { 00:22:03.900 "job": "nvme0n1", 00:22:03.900 "core_mask": "0x2", 00:22:03.900 "workload": "randwrite", 00:22:03.900 "status": "finished", 00:22:03.900 "queue_depth": 16, 00:22:03.900 "io_size": 131072, 00:22:03.900 "runtime": 2.003241, 00:22:03.900 "iops": 7196.33833373019, 00:22:03.900 "mibps": 899.5422917162738, 00:22:03.900 "io_failed": 0, 00:22:03.900 "io_timeout": 0, 00:22:03.900 "avg_latency_us": 2218.34448188881, 00:22:03.900 "min_latency_us": 1802.24, 00:22:03.900 "max_latency_us": 4855.6218181818185 00:22:03.900 } 00:22:03.900 ], 00:22:03.900 "core_count": 1 00:22:03.900 } 00:22:03.900 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:22:03.900 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:22:03.900 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:03.900 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:03.900 | select(.opcode=="crc32c") 00:22:03.900 | "\(.module_name) \(.executed)"' 00:22:03.900 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:04.468 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:22:04.468 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 97478 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97478 ']' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97478 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97478 00:22:04.469 killing process with pid 97478 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97478' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97478 00:22:04.469 Received shutdown signal, test time was about 2.000000 seconds 00:22:04.469 00:22:04.469 Latency(us) 00:22:04.469 [2024-12-16T01:43:35.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.469 [2024-12-16T01:43:35.127Z] =================================================================================================================== 00:22:04.469 [2024-12-16T01:43:35.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97478 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 97311 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 97311 ']' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 97311 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97311 00:22:04.469 killing process with pid 97311 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97311' 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 97311 00:22:04.469 01:43:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 97311 00:22:04.469 ************************************ 00:22:04.469 END TEST nvmf_digest_clean 00:22:04.469 ************************************ 00:22:04.469 00:22:04.469 real 0m15.123s 00:22:04.469 user 0m29.462s 00:22:04.469 sys 0m4.348s 00:22:04.469 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:04.469 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:04.728 ************************************ 00:22:04.728 START TEST nvmf_digest_error 00:22:04.728 ************************************ 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:22:04.728 01:43:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=97563 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 97563 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97563 ']' 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.728 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.728 [2024-12-16 01:43:35.236159] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:04.728 [2024-12-16 01:43:35.236255] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.728 [2024-12-16 01:43:35.383804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.986 [2024-12-16 01:43:35.403398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.986 [2024-12-16 01:43:35.403480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.986 [2024-12-16 01:43:35.403506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.986 [2024-12-16 01:43:35.403513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.986 [2024-12-16 01:43:35.403520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:04.986 [2024-12-16 01:43:35.403897] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.986 [2024-12-16 01:43:35.528298] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:04.986 [2024-12-16 01:43:35.568332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:04.986 null0 00:22:04.986 [2024-12-16 01:43:35.600970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.986 [2024-12-16 01:43:35.625064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97587 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97587 /var/tmp/bperf.sock 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:04.986 01:43:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97587 ']' 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.986 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:05.244 [2024-12-16 01:43:35.680892] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:05.244 [2024-12-16 01:43:35.680988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97587 ] 00:22:05.244 [2024-12-16 01:43:35.823167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.244 [2024-12-16 01:43:35.842348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.244 [2024-12-16 01:43:35.870778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:05.503 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.503 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:05.503 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:05.503 01:43:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:05.503 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:05.503 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.503 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:05.762 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.762 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:05.762 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:06.022 nvme0n1 00:22:06.022 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:06.022 01:43:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.022 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:06.022 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.022 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:06.022 01:43:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:06.022 Running I/O for 2 seconds... 00:22:06.022 [2024-12-16 01:43:36.601962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.022 [2024-12-16 01:43:36.602019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.022 [2024-12-16 01:43:36.602032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.022 [2024-12-16 01:43:36.616391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.022 [2024-12-16 01:43:36.616439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.022 [2024-12-16 01:43:36.616450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.022 [2024-12-16 01:43:36.631253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.022 [2024-12-16 01:43:36.631300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.022 [2024-12-16 01:43:36.631312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.022 [2024-12-16 01:43:36.647737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.022 [2024-12-16 01:43:36.647787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.022 [2024-12-16 01:43:36.647801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.022 [2024-12-16 01:43:36.664366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.022 [2024-12-16 01:43:36.664413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.022 [2024-12-16 01:43:36.664424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.281 [2024-12-16 01:43:36.681502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.281 [2024-12-16 01:43:36.681577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20746 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.281 [2024-12-16 01:43:36.681591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.697350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.697397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.697409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.712638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.712686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.712698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.727506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.727562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.727573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.742673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.742718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.742729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.757961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.758008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.758019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.773427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.773473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.773485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.788460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.788506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:19247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.788517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.803604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.803649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.803660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.819015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.819062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.819074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.834258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.834306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.834317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.848957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.849002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.849014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.863233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.863281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.863307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.877786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.877830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.877841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.892123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.892168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.892179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.906377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.906423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.906434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.920657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.920703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.920714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.282 [2024-12-16 01:43:36.935120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.282 [2024-12-16 01:43:36.935165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.282 [2024-12-16 01:43:36.935175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:36.950384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:36.950431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:36.950458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:36.964692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:36.964736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:36.964747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:36.978924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:36.978968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:36.978979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:36.993129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 
[2024-12-16 01:43:36.993174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:36.993184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.007486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.007532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.007568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.021703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.021749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.021760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.035982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.036028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.036039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.050914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.050960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.050971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.068054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.068118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.068129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.084451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.084496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.084507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.099346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.099390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.099401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.113653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.113699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.113710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.128151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.128197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.128208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.142375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.142421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.142433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.156538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.156582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.156592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.170776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.170820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.170831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.542 [2024-12-16 01:43:37.184981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.542 [2024-12-16 01:43:37.185025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.542 [2024-12-16 01:43:37.185035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.200087] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.200132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.200143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.214806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.214850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.214861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.229123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.229169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.229179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.243344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.243388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.243399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.258030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.258075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.258086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.272653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.272697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.272708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.287418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.287463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.287474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:06.802 [2024-12-16 01:43:37.301559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.301604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.301615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.315877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.315921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.315932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.330101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.330186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.330197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.344279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.344324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.344335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.358563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.358616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.358627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.372741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.372785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.372796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.386987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.387031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.387041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.401122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.401166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.401177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.415415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.415460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.415470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.429576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.429620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.802 [2024-12-16 01:43:37.429631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:06.802 [2024-12-16 01:43:37.443801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:06.802 [2024-12-16 01:43:37.443846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.803 [2024-12-16 01:43:37.443857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.459014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.459061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.459073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.473667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.473728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.473739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.488117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.488161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.488171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.502400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.502446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.502487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.516778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.516823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.516849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.537054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.537100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.537111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.551297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.551342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.551353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.565447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.565492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.565502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 17079.00 IOPS, 66.71 MiB/s [2024-12-16T01:43:37.720Z] [2024-12-16 01:43:37.580017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.580042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.580054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.594236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.594283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:14836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.594295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.608624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.608670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.608681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.622926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.622971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.622982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.637115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.637160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.651412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.651457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.651469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.665514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.665569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.665581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.679639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.062 [2024-12-16 01:43:37.679683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.062 [2024-12-16 01:43:37.679694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.062 [2024-12-16 01:43:37.693798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.063 [2024-12-16 01:43:37.693843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.063 [2024-12-16 01:43:37.693853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.063 [2024-12-16 01:43:37.708357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.063 [2024-12-16 01:43:37.708402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.063 [2024-12-16 01:43:37.708413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.723820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.723865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.723876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.738063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.738115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.738144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.752396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.752443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.752454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.767008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.767053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.767064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.781168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.781213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.781224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.795437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 
00:22:07.322 [2024-12-16 01:43:37.795482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.795493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.810200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.810246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.810257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.824774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.824819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.824830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.839807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.839853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.839864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.856307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.856355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.856366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.872839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.872885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.872911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.888332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.888378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.322 [2024-12-16 01:43:37.888389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.322 [2024-12-16 01:43:37.903212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x611390) 00:22:07.322 [2024-12-16 01:43:37.903258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.323 [2024-12-16 01:43:37.903269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.323 [2024-12-16 01:43:37.917983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.323 [2024-12-16 01:43:37.918030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.323 [2024-12-16 01:43:37.918041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.323 [2024-12-16 01:43:37.932836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.323 [2024-12-16 01:43:37.932866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.323 [2024-12-16 01:43:37.932878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.323 [2024-12-16 01:43:37.948053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.323 [2024-12-16 01:43:37.948083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.323 [2024-12-16 01:43:37.948094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.323 [2024-12-16 01:43:37.963244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.323 [2024-12-16 01:43:37.963291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.323 [2024-12-16 01:43:37.963303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.582 [2024-12-16 01:43:37.978743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.582 [2024-12-16 01:43:37.978789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.582 [2024-12-16 01:43:37.978801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.582 [2024-12-16 01:43:37.994185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.582 [2024-12-16 01:43:37.994233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.582 [2024-12-16 01:43:37.994244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.582 [2024-12-16 01:43:38.009395] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.582 [2024-12-16 01:43:38.009440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.009451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.023822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.023869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.023881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.037976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.038020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.038031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.052325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.052369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.052380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.066627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.066671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.066682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.082592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.082631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.082642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.100016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.100060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.100071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:07.583 [2024-12-16 01:43:38.116121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.116166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.116177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.130855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.130900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.130911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.144971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.145015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.145027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.159173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.159217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.159227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.173514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.173569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.173581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.187690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.187735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.187745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.201819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.201849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.201859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.215973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.216017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.216027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.583 [2024-12-16 01:43:38.230155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.583 [2024-12-16 01:43:38.230186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.583 [2024-12-16 01:43:38.230197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.245568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.245612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.245623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.259893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.259924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.259935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.274051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.274095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.274106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.288237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.288282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.288293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.302532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.302586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.302597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.317065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.317109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.317120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.331803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.331848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.331859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.345969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.346014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.346025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.360246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.360290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.360301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.374829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.374861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.374872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.388962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.389006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.389017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.403258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.403302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.403312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.417405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.417451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.417461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.431474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.431519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.431530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.445499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.445553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.445566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.459690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.459734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.459744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.480172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.480217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.480228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:07.843 [2024-12-16 01:43:38.494652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:07.843 [2024-12-16 01:43:38.494698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.843 [2024-12-16 01:43:38.494724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.102 [2024-12-16 01:43:38.510061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:08.102 [2024-12-16 01:43:38.510107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.102 [2024-12-16 
01:43:38.510143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.102 [2024-12-16 01:43:38.524402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:08.102 [2024-12-16 01:43:38.524448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.102 [2024-12-16 01:43:38.524459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.102 [2024-12-16 01:43:38.538719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:08.102 [2024-12-16 01:43:38.538764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.102 [2024-12-16 01:43:38.538775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.102 [2024-12-16 01:43:38.552814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:08.103 [2024-12-16 01:43:38.552859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.103 [2024-12-16 01:43:38.552870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.103 [2024-12-16 01:43:38.567230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:08.103 [2024-12-16 01:43:38.567274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.103 [2024-12-16 01:43:38.567285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.103 17205.00 IOPS, 67.21 MiB/s [2024-12-16T01:43:38.761Z] [2024-12-16 01:43:38.581866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x611390) 00:22:08.103 [2024-12-16 01:43:38.581913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:08.103 [2024-12-16 01:43:38.581924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:08.103 00:22:08.103 Latency(us) 00:22:08.103 [2024-12-16T01:43:38.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.103 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:08.103 nvme0n1 : 2.01 17217.31 67.26 0.00 0.00 7428.67 6791.91 27286.81 00:22:08.103 [2024-12-16T01:43:38.761Z] =================================================================================================================== 00:22:08.103 [2024-12-16T01:43:38.761Z] Total : 17217.31 67.26 0.00 0.00 7428.67 6791.91 27286.81 00:22:08.103 { 00:22:08.103 "results": [ 00:22:08.103 { 00:22:08.103 "job": "nvme0n1", 00:22:08.103 "core_mask": "0x2", 00:22:08.103 "workload": "randread", 00:22:08.103 "status": "finished", 00:22:08.103 
"queue_depth": 128, 00:22:08.103 "io_size": 4096, 00:22:08.103 "runtime": 2.006004, 00:22:08.103 "iops": 17217.313624499253, 00:22:08.103 "mibps": 67.2551313457002, 00:22:08.103 "io_failed": 0, 00:22:08.103 "io_timeout": 0, 00:22:08.103 "avg_latency_us": 7428.674167372959, 00:22:08.103 "min_latency_us": 6791.912727272727, 00:22:08.103 "max_latency_us": 27286.807272727274 00:22:08.103 } 00:22:08.103 ], 00:22:08.103 "core_count": 1 00:22:08.103 } 00:22:08.103 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:08.103 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:08.103 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:08.103 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:08.103 | .driver_specific 00:22:08.103 | .nvme_error 00:22:08.103 | .status_code 00:22:08.103 | .command_transient_transport_error' 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 )) 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97587 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97587 ']' 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97587 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97587 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97587' 00:22:08.362 killing process with pid 97587 00:22:08.362 Received shutdown signal, test time was about 2.000000 seconds 00:22:08.362 00:22:08.362 Latency(us) 00:22:08.362 [2024-12-16T01:43:39.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:08.362 [2024-12-16T01:43:39.020Z] =================================================================================================================== 00:22:08.362 [2024-12-16T01:43:39.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97587 00:22:08.362 01:43:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97587 00:22:08.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97633 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97633 /var/tmp/bperf.sock 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97633 ']' 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.622 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:08.622 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:08.622 Zero copy mechanism will not be used. 00:22:08.622 [2024-12-16 01:43:39.090707] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:08.622 [2024-12-16 01:43:39.090806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97633 ] 00:22:08.622 [2024-12-16 01:43:39.238966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.622 [2024-12-16 01:43:39.258042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.881 [2024-12-16 01:43:39.287277] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:08.881 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.881 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:08.881 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:08.881 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:09.140 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:09.140 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.140 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:09.140 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.140 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:09.140 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:09.400 nvme0n1 00:22:09.400 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:09.400 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.400 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:09.400 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.400 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:09.400 01:43:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:09.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:09.400 Zero copy mechanism will not be used. 00:22:09.400 Running I/O for 2 seconds... 
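Before the second pass's error records begin below, the trace shows its setup: bdevperf is started against /var/tmp/bperf.sock with a 128 KiB random-read workload at queue depth 16, NVMe error statistics and the bdev retry count are configured, the controller is attached over TCP with data digests enabled, and the accel crc32c operation is first left uncorrupted and then re-armed in corrupt mode (-i 32) so that received payloads fail their CRC-32C data-digest check. A sketch of that RPC sequence using only commands that appear in the trace; the bperf_rpc wrapper mirrors the one traced from digest.sh, while the socket used for the accel_error_inject_error calls is not visible in this excerpt and is left at rpc.py's default here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# bperf_rpc in the trace forwards to rpc.py against the bdevperf RPC socket.
bperf_rpc() { "$rpc" -s /var/tmp/bperf.sock "$@"; }

# Track NVMe errors per status code and set the bdev retry count to -1,
# as configured in the trace.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Leave crc32c corruption disabled while the controller attaches.
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digests enabled (--ddgst) so every READ
# payload is covered by a CRC-32C data digest.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm crc32c error injection in corrupt mode (-i 32, as in the trace) so
# received digests stop matching, producing the "data digest error" /
# TRANSIENT TRANSPORT ERROR records that follow.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the timed workload bdevperf was launched with (-w randread
# -o 131072 -q 16 -t 2) and wait for the results.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests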
00:22:09.400 [2024-12-16 01:43:39.946603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.400 [2024-12-16 01:43:39.946651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.400 [2024-12-16 01:43:39.946663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.400 [2024-12-16 01:43:39.950670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.400 [2024-12-16 01:43:39.950703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.400 [2024-12-16 01:43:39.950714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.400 [2024-12-16 01:43:39.954542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.400 [2024-12-16 01:43:39.954600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.400 [2024-12-16 01:43:39.954612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.958519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.958576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.958588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.962421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.962515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.962541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.966381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.966414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.966424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.970250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.970281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.970293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.974234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.974266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.974277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.978094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.978163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.978174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.982006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.982051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.982061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.985982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.986014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.986025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.989883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.989928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.989939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.993824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.993870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.993881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:39.997744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:39.997790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:39.997801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.002029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.002065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.002078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.006490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.006562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.006575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.010930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.010978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.010989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.015176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.015224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.015236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.019574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.019622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.019634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.024594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.024685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.024699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.029069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.029116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.029128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.033281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.033328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.033339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.037502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.037559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.037570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.041434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.041479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.041490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.045402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.045448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.045459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.049367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.049412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.049423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.401 [2024-12-16 01:43:40.053584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.401 [2024-12-16 01:43:40.053628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.401 [2024-12-16 01:43:40.053639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.057943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.057976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 
[2024-12-16 01:43:40.057987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.062351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.062386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.062399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.066491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.066562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.066574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.070351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.070385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.070397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.074327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.074362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.074374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.078268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.078303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.078315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.082272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.082304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.082315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.086245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.086277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.086289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.090139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.090186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.662 [2024-12-16 01:43:40.090197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.662 [2024-12-16 01:43:40.094168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.662 [2024-12-16 01:43:40.094201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.094212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.098014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.098059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.098070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.101995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.102040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.102050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.105962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.106008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.106018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.109870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.109915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.109925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.113757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.113802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.113813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.117988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.118041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.118102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.122408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.122453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.122476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.126807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.126841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.126852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.131273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.131321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.131333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.136230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.136277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.136304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.140857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.140921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.140932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.145364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.145409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.145420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.149827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.149860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.149872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.154104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.154174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.154186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.158405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.158441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.158467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.162426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.162487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.162498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.166380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.166411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.166422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.170357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.170390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.170402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.174420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 
[2024-12-16 01:43:40.174487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.174498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.178349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.178381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.178392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.182317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.182363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.182375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.186237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.186268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.186279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.190088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.190156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.190183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.194088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.194156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.194168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.198043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.198088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.198099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.202048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.202093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.202105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.206016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.663 [2024-12-16 01:43:40.206061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.663 [2024-12-16 01:43:40.206072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.663 [2024-12-16 01:43:40.210222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.210255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.210266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.214319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.214351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.214362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.218416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.218463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.218489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.222385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.222417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.222428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.226366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.226411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.226422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.230780] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.230827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.230839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.235081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.235127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.235138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.239275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.239322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.239333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.243793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.243824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.243844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.248403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.248450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.248462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.253034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.253081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.253091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.257317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.257363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.257374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:22:09.664 [2024-12-16 01:43:40.261648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.261694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.261705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.265781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.265827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.265838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.270012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.270058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.270069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.274256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.274290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.274302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.278479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.278548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.278561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.282687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.282733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.282745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.286797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.286843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.286854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.290826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.290872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.290884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.295131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.295177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.295188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.299220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.299265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.299277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.303316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.303361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.303372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.307414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.307460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.307471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.311461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.311506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.311517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.664 [2024-12-16 01:43:40.315918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.664 [2024-12-16 01:43:40.315965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.664 [2024-12-16 01:43:40.315977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.925 [2024-12-16 01:43:40.320397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.925 [2024-12-16 01:43:40.320444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.925 [2024-12-16 01:43:40.320455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.925 [2024-12-16 01:43:40.324839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.925 [2024-12-16 01:43:40.324885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.925 [2024-12-16 01:43:40.324896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.925 [2024-12-16 01:43:40.329002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.329048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.329059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.333236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.333268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.333280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.337325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.337357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.337383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.341310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.341356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.341368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.345401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.345448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.345459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.349479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.349525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.349536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.353745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.353791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.353802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.357758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.357803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.357814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.361769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.361815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.361826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.365817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.365862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.365873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.369792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.369837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.369849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.373963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.374009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 
[2024-12-16 01:43:40.374020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.377923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.377965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.377977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.382031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.382077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.382088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.386098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.386167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.386179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.390403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.390465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.390476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.394546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.394600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.394611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.398546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.398600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.398612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.402638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.402683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9056 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.402694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.406648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.406694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.406705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.410851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.410897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.410908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.414856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.414901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.414928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.419047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.419093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.419105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.423113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.423159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.423170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.427443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.427490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.427500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.431604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.926 [2024-12-16 01:43:40.431638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.926 [2024-12-16 01:43:40.431649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.926 [2024-12-16 01:43:40.435626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.435672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.435683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.439662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.439695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.439707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.443992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.444039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.444050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.448127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.448172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.448183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.452328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.452373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.452384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.456341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.456387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.456398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.460285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.460330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.460340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.464313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.464357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.464368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.468330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.468374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.468385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.472239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.472284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.472294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.476388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.476432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.476443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.480427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.480471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.480482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.484475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.484521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.484532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.488406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 
[2024-12-16 01:43:40.488451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.488461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.492328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.492374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.492384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.496389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.496434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.496444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.500310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.500356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.500367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.504272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.504319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.504330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.508342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.508388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.508399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.512466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.512512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.512523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.516373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.516418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.516429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.520621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.520663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.520676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.525768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.525817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.525829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.531062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.531093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.531105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.536521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.536565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.536593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.541107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.541140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.541151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.545068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.545113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.545124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.549050] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.927 [2024-12-16 01:43:40.549096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.927 [2024-12-16 01:43:40.549107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.927 [2024-12-16 01:43:40.553029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.553072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.553082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.928 [2024-12-16 01:43:40.556929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.556974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.556985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:09.928 [2024-12-16 01:43:40.560920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.560965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.560976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:09.928 [2024-12-16 01:43:40.564883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.564928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.564939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:09.928 [2024-12-16 01:43:40.568836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.568881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.568892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:09.928 [2024-12-16 01:43:40.572901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.572946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.572957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:22:09.928 [2024-12-16 01:43:40.577182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:09.928 [2024-12-16 01:43:40.577227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:09.928 [2024-12-16 01:43:40.577238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.581663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.581709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.581719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.585675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.585718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.585729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.589942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.589987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.589998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.593987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.594031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.594042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.597920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.597965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.597976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.601925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.601970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.601981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.606003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.606048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.606059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.610166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.610198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.610210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.614271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.614303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.614315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.618187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.618218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.618229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.622078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.622146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.622173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.626280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.626311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.626323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.630202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.630233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.630244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.633999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.634044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.634055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.637896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.637941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.637952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.641771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.641816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.641827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.645648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.645692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.645703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.649516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.649572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.649583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.653397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.653442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.653453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.657293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.657339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.657350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.661244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.661289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.661300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.189 [2024-12-16 01:43:40.665207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.189 [2024-12-16 01:43:40.665252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.189 [2024-12-16 01:43:40.665262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.669117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.669162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.669173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.673084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.673129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.673140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.677013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.677058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.677069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.680969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.681014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.681025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.684889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.684934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 
[2024-12-16 01:43:40.684944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.688866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.688911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.688922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.692836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.692881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.692892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.696832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.696878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.696888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.700769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.700814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.700824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.704689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.704733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.704743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.708673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.708718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.708728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.712607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.712652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.712663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.716542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.716586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.716596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.720536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.720579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.720589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.724418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.724465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.724476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.728401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.728446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.728457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.732340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.732385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.732396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.736261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.736305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.736316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.740225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.740271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.740281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.744293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.744338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.744349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.748218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.748263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.748274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.752153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.752198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.752209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.756123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.756169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.756179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.760271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.760318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.760329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.764487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.764533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.764554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.768439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.768484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.768495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.772320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.772365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.190 [2024-12-16 01:43:40.772375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.190 [2024-12-16 01:43:40.776296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.190 [2024-12-16 01:43:40.776341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.776352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.780226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.780270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.780282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.784191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.784236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.784247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.788237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.788282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.788293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.792241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.792286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.792297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.796246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 
[2024-12-16 01:43:40.796292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.796302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.800215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.800260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.800270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.804197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.804242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.804252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.808212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.808257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.808268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.812298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.812343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.812354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.816280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.816326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.816336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.820267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.820312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.820323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.824246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.824292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.824303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.828337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.828382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.828392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.832376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.832421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.832432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.836321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.836367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.836377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.191 [2024-12-16 01:43:40.840586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.191 [2024-12-16 01:43:40.840642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.191 [2024-12-16 01:43:40.840653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.844913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.844960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.844972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.849013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.849058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.849068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.853367] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.853411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.853422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.857310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.857355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.857366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.861260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.861304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.861315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.865357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.865404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.865416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.869360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.869407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.869418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.873301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.873347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.873358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.877251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.877297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.877307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:22:10.452 [2024-12-16 01:43:40.881211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.881256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.881266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.885138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.885183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.885194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.889101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.889146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.889156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.893022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.893067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.893078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.897034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.897079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.897089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.900992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.901040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.901051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.452 [2024-12-16 01:43:40.904933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.452 [2024-12-16 01:43:40.904977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.452 [2024-12-16 01:43:40.904988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.908855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.908900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.908910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.912844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.912889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.912899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.916712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.916756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.916767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.920742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.920788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.920798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.924731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.924776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.924787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.928711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.928755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.928766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.932719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.932764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.932774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.936695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.936739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.936749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.940658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.940702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.940713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 7548.00 IOPS, 943.50 MiB/s [2024-12-16T01:43:41.111Z] [2024-12-16 01:43:40.945647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.945691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.945702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.949535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.949579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.949590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.953619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.953665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.953676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.957556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.957600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.957610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.961513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.961569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.453 [2024-12-16 01:43:40.961580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.965419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.965463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.965473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.969437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.969484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.969495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.973381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.973427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.973437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.977400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.977445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.977456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.981380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.981425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.981435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.985349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.985395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.985405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.989279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.989324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.989335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.993225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.993269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.993280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:40.997255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:40.997300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:40.997312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:41.001304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:41.001349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:41.001359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:41.005214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:41.005260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:41.005271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:41.009264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:41.009309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:41.009320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:41.013272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:41.013317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.453 [2024-12-16 01:43:41.013328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.453 [2024-12-16 01:43:41.017378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.453 [2024-12-16 01:43:41.017419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.017430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.021394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.021436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.021447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.025399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.025440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.025451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.029382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.029424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.029434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.033403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.033448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.033458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.037350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.037395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.037406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.041293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.041338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.041349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.045210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.045256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.045266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.049148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.049193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.049204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.053219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.053265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.053276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.057176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.057220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.057231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.061105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.061150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.061160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.065020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.065065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.065075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.068889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.068933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.068944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.072950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 
[2024-12-16 01:43:41.072995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.073006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.076925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.076970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.076981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.080855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.080900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.080911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.084799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.084829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.084840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.088655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.088699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.088709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.092664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.092694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.092705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.096691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.096721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.096731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.100683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.100713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.100723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.454 [2024-12-16 01:43:41.105016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.454 [2024-12-16 01:43:41.105062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.454 [2024-12-16 01:43:41.105073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.109475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.109520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.109531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.113678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.113724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.113736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.117796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.117842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.117853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.121835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.121880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.121891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.125796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.125840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.125851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.129711] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.129755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.129766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.133653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.133697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.133708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.137520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.137575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.137585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.141760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.141793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.141805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.146089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.146159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.715 [2024-12-16 01:43:41.146186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.715 [2024-12-16 01:43:41.150719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.715 [2024-12-16 01:43:41.150753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.150765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.155275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.155321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.155332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:22:10.716 [2024-12-16 01:43:41.160073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.160120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.160131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.164727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.164777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.164790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.169253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.169298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.169309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.173651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.173699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.173712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.178202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.178235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.178247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.182528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.182600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.182611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.186820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.186850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.186861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.190806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.190851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.190862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.194783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.194827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.194838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.198663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.198706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.198717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.202638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.202698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.202708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.206549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.206604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.206615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.210538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.210593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.210604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.214373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.214420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.214431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.218296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.218327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.218339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.222237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.222285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.222297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.226320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.226351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.226362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.230193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.230239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.230250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.234148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.234180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.234191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.238054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.238099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.238116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.241990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.242035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.242046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.245958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.246002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.246013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.249872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.249917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.249944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.253794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.253840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.253851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.257843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.257888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.257899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.716 [2024-12-16 01:43:41.261806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.716 [2024-12-16 01:43:41.261851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.716 [2024-12-16 01:43:41.261862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.265712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.265757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.265769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.269589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.269632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 
[2024-12-16 01:43:41.269643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.273462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.273507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.273518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.277343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.277388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.277399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.281371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.281417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.281428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.285299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.285344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.285355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.289169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.289213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.289224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.293137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.293182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.293193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.297113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.297158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24544 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.297168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.301003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.301048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.301059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.304924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.304969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.304979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.308962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.309008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.309019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.313131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.313176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.313187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.317107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.317151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.317162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.321000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.321044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.321055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.324943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.324988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.324999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.328876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.328920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.328931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.332777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.332822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.332833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.336755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.336800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.336810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.340693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.340737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.340748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.344713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.344758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.344769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.348705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.348750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.348761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.352797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.352843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.352854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.356869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.356915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.356926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.360843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.360888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.360899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.365019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.365064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.365074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.717 [2024-12-16 01:43:41.369437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.717 [2024-12-16 01:43:41.369482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.717 [2024-12-16 01:43:41.369493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.373635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.373678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.373689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.377937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.377983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.377993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.381870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 
00:22:10.978 [2024-12-16 01:43:41.381915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.381925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.386006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.386050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.386061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.389868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.389913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.389924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.393800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.393844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.393855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.397698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.397742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.397753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.401593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.401636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.401646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.405431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.405476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.405487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.409412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.409456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.409466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.413440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.413485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.413496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.417365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.417410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.417421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.421333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.421379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.978 [2024-12-16 01:43:41.421389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.978 [2024-12-16 01:43:41.425344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.978 [2024-12-16 01:43:41.425389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.425399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.429309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.429354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.429365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.433318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.433360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.433371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.437249] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.437294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.437304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.441229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.441274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.441284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.445646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.445692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.445704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.449886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.449962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.449972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.454084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.454153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.454166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.458630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.458672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.458697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.463180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.463229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.463240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:22:10.979 [2024-12-16 01:43:41.468004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.468051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.468061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.472372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.472417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.472428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.476881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.476942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.476953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.481081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.481127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.481138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.485263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.485309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.485320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.489441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.489487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.489499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.493678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.493723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.493735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.497658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.497703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.497714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.501647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.501693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.501704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.505568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.505613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.505623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.509739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.509769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.509780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.513704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.513749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.513760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.517681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.517726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.517736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.521671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.521716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.521727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.525672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.525717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.525728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.529905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.529951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.529962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.533886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.533932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.533944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.537837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.979 [2024-12-16 01:43:41.537883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.979 [2024-12-16 01:43:41.537894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.979 [2024-12-16 01:43:41.541838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.541883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.541895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.545926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.545971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.545982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.549958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.549989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.550000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.553950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.553983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.553994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.557885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.557928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.557939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.561820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.561865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.561875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.565783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.565825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.565836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.569944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.569989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.570000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.573915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.573957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.573967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.577866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.577912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:10.980 [2024-12-16 01:43:41.577923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.581867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.581912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.581923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.586062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.586095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.586106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.590082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.590149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.590162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.594193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.594224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.594235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.598202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.598233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.598244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.602229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.602260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.602271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.606585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.606639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.606651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.610733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.610777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.610788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.614801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.614846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.614857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.618816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.618861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.618872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.622822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.622868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.622879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.627267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.627313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.627324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:10.980 [2024-12-16 01:43:41.631773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:10.980 [2024-12-16 01:43:41.631835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:10.980 [2024-12-16 01:43:41.631846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.636153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.636198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.636209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.640620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.640676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.640687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.644803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.644848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.644859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.648803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.648849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.648860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.652838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.652883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.652894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.656936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.656982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.656993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.661164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.661209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.661221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.665356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.665402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.665412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.669417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.669461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.669472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.673522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.673592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.673603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.677466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.677511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.677522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.681356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.681402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.681412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.685302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.685347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.685358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.689248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.689294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.689304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.693258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 
[2024-12-16 01:43:41.693304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.693314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.697172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.697217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.241 [2024-12-16 01:43:41.697228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.241 [2024-12-16 01:43:41.700996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.241 [2024-12-16 01:43:41.701042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.701052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.704981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.705026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.705037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.708914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.708959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.708970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.712863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.712908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.712919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.716789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.716834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.716844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.720763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.720808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.720819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.724678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.724723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.724733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.728634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.728678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.728689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.732524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.732580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.732591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.736447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.736492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.736503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.740296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.740341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.740352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.744280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.744325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.744336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.748229] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.748274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.748285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.752252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.752297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.752307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.756162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.756207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.756218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.760227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.760272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.760283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.764201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.764246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.764257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.768137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.768181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.768192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.772039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.772084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.772095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:22:11.242 [2024-12-16 01:43:41.776008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.776053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.776064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.779968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.780014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.780024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.783898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.783960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.783972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.787855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.787887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.787900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.791862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.791893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.791904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.795825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.795856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.795867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.799716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.799760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.799772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.803660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.803691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.803702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.807520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.242 [2024-12-16 01:43:41.807575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.242 [2024-12-16 01:43:41.807586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.242 [2024-12-16 01:43:41.811422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.811467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.811477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.815371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.815417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.815428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.819318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.819364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.819374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.823217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.823262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.823273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.827163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.827209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.827219] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.831135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.831181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.831192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.835064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.835109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.835120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.839017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.839062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.839072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.842923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.842967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.842978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.846787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.846831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.846841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.850742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.850787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.850798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.854680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.854723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.854734] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.858578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.858634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.858644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.862415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.862461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.862488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.866373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.866404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.866415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.870348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.870380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.870390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.874282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.874312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.874322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.878248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.878280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.878291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.882297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.882328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:11.243 [2024-12-16 01:43:41.882339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.886305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.886337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.886348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.890241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.890272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.890283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.243 [2024-12-16 01:43:41.894727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.243 [2024-12-16 01:43:41.894772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.243 [2024-12-16 01:43:41.894782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.899012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.899057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.899068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.903265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.903312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.903323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.907269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.907314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.907325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.911339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.911384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19392 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.911394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.915349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.915394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.915405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.919389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.919434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.919445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.923312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.923357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.923367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.927281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.927327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.927339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.931325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.931368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.931379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.935268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.935313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.935324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:11.503 [2024-12-16 01:43:41.939216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.939272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:11.503 7602.50 IOPS, 950.31 MiB/s [2024-12-16T01:43:42.161Z] [2024-12-16 01:43:41.944501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf15550) 00:22:11.503 [2024-12-16 01:43:41.944556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:11.503 [2024-12-16 01:43:41.944568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:11.503 00:22:11.503 Latency(us) 00:22:11.503 [2024-12-16T01:43:42.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.503 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:11.503 nvme0n1 : 2.00 7604.58 950.57 0.00 0.00 2100.75 1750.11 7685.59 00:22:11.503 [2024-12-16T01:43:42.161Z] =================================================================================================================== 00:22:11.503 [2024-12-16T01:43:42.161Z] Total : 7604.58 950.57 0.00 0.00 2100.75 1750.11 7685.59 00:22:11.503 { 00:22:11.503 "results": [ 00:22:11.503 { 00:22:11.503 "job": "nvme0n1", 00:22:11.503 "core_mask": "0x2", 00:22:11.503 "workload": "randread", 00:22:11.503 "status": "finished", 00:22:11.503 "queue_depth": 16, 00:22:11.503 "io_size": 131072, 00:22:11.503 "runtime": 2.003662, 00:22:11.503 "iops": 7604.576021304991, 00:22:11.503 "mibps": 950.5720026631238, 00:22:11.503 "io_failed": 0, 00:22:11.503 "io_timeout": 0, 00:22:11.503 "avg_latency_us": 2100.747482861694, 00:22:11.503 "min_latency_us": 1750.1090909090908, 00:22:11.503 "max_latency_us": 7685.585454545455 00:22:11.503 } 00:22:11.503 ], 00:22:11.503 "core_count": 1 00:22:11.503 } 00:22:11.503 01:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:11.503 01:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:11.503 01:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:11.504 01:43:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:11.504 | .driver_specific 00:22:11.504 | .nvme_error 00:22:11.504 | .status_code 00:22:11.504 | .command_transient_transport_error' 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 492 > 0 )) 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97633 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97633 ']' 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97633 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.763 01:43:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97633 00:22:11.763 killing process with pid 97633 00:22:11.763 Received shutdown signal, test time was about 2.000000 seconds 00:22:11.763 00:22:11.763 Latency(us) 00:22:11.763 [2024-12-16T01:43:42.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.763 [2024-12-16T01:43:42.421Z] =================================================================================================================== 00:22:11.763 [2024-12-16T01:43:42.421Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97633' 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97633 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97633 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97683 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97683 /var/tmp/bperf.sock 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97683 ']' 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:11.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.763 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:12.023 [2024-12-16 01:43:42.456663] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
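[editor's note] The preceding trace closes out the 16-deep randread error run: get_transient_errcount reads the per-bdev NVMe error statistics that --nvme-error-stat accumulates inside the bdevperf process, the test asserts the counter is non-zero ((( 492 > 0 ))), kills pid 97633, and then launches a fresh bdevperf for the randwrite 4096/128 case. A minimal stand-alone sketch of that counter check, with the socket path, bdev name, and jq filter copied from the trace (the final echo is only illustrative):

# Read per-bdev NVMe error statistics over the bdevperf RPC socket and pull out
# the transient transport error counter that the corrupted data digests increment.
errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The digest error test only passes if at least one such error was recorded,
# mirroring the (( 492 > 0 )) assertion in the trace above.
(( errcount > 0 )) && echo "digest error path exercised: $errcount transient transport errors"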
00:22:12.023 [2024-12-16 01:43:42.456763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97683 ] 00:22:12.023 [2024-12-16 01:43:42.602342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.023 [2024-12-16 01:43:42.621430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.023 [2024-12-16 01:43:42.649445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:12.282 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.282 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:12.282 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:12.282 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:12.541 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:12.541 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.541 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:12.541 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.541 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.541 01:43:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:12.799 nvme0n1 00:22:12.799 01:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:12.799 01:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.799 01:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:12.799 01:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.799 01:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:12.799 01:43:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:12.799 Running I/O for 2 seconds... 
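[editor's note] The trace above is the setup half of run_bperf_err for the randwrite / 4 KiB / QD 128 case: the new bdevperf waits on /var/tmp/bperf.sock (-z), NVMe error statistics and unlimited bdev retries are enabled, the controller is attached with data digest (--ddgst) on, and accel crc32c error injection is flipped from disable to corrupt with an interval of 256 before perform_tests starts the 2-second workload. A condensed sketch of the same RPC sequence; the bdevperf-side commands and their arguments are copied from the trace, while routing accel_error_inject_error to /var/tmp/spdk.sock is an assumption, since the trace only shows the rpc_cmd wrapper and not the socket it resolves to:

# bperf_rpc in the trace: talks to the bdevperf instance over its -r socket.
BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
# rpc_cmd in the trace: talks to the running target app; the default socket
# /var/tmp/spdk.sock is assumed here.
TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# Count NVMe errors per status code and retry failed I/O indefinitely, so the
# injected digest errors surface as transient transport errors rather than failures.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start with healthy crc32c, then attach the controller with data digest enabled.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt every 256th crc32c operation so data digest verification fails.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
# Run the 2-second workload configured on the bdevperf command line.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests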
00:22:12.799 [2024-12-16 01:43:43.448109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb048 00:22:12.799 [2024-12-16 01:43:43.449482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:12.799 [2024-12-16 01:43:43.449515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:13.058 [2024-12-16 01:43:43.463213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb8b8 00:22:13.058 [2024-12-16 01:43:43.464568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.464640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.477344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc128 00:22:13.059 [2024-12-16 01:43:43.478771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.478801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.491218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc998 00:22:13.059 [2024-12-16 01:43:43.492526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.492613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.504719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efd208 00:22:13.059 [2024-12-16 01:43:43.505973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.506002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.518342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efda78 00:22:13.059 [2024-12-16 01:43:43.519643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.519670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.531669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efe2e8 00:22:13.059 [2024-12-16 01:43:43.532908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.532967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 
p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.545068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efeb58 00:22:13.059 [2024-12-16 01:43:43.546298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.546328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.563950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efef90 00:22:13.059 [2024-12-16 01:43:43.566218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.566247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.577429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efeb58 00:22:13.059 [2024-12-16 01:43:43.579817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.579860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.590998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efe2e8 00:22:13.059 [2024-12-16 01:43:43.593330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.593373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.604755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efda78 00:22:13.059 [2024-12-16 01:43:43.606996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.607040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.618292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efd208 00:22:13.059 [2024-12-16 01:43:43.620473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.620515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.631792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc998 00:22:13.059 [2024-12-16 01:43:43.633910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.633967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.645267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc128 00:22:13.059 [2024-12-16 01:43:43.647557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.647607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.658777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb8b8 00:22:13.059 [2024-12-16 01:43:43.660884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.660911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.672230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb048 00:22:13.059 [2024-12-16 01:43:43.674389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.674419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.685599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efa7d8 00:22:13.059 [2024-12-16 01:43:43.687706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.687749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.699085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef9f68 00:22:13.059 [2024-12-16 01:43:43.701146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:2 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.059 [2024-12-16 01:43:43.701190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:13.059 [2024-12-16 01:43:43.713078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef96f8 00:22:13.318 [2024-12-16 01:43:43.715590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.318 [2024-12-16 01:43:43.715643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:13.318 [2024-12-16 01:43:43.727531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef8e88 00:22:13.318 [2024-12-16 01:43:43.729515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.318 [2024-12-16 01:43:43.729566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:13.318 [2024-12-16 01:43:43.740965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef8618 00:22:13.318 [2024-12-16 01:43:43.743139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.318 [2024-12-16 01:43:43.743181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:13.318 [2024-12-16 01:43:43.754554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef7da8 00:22:13.318 [2024-12-16 01:43:43.756525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.756575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.767977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef7538 00:22:13.319 [2024-12-16 01:43:43.769959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.770000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.781983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef6cc8 00:22:13.319 [2024-12-16 01:43:43.784004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.784046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.795483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef6458 00:22:13.319 [2024-12-16 01:43:43.797480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.797523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.808801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef5be8 00:22:13.319 [2024-12-16 01:43:43.810894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.810923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.822318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef5378 00:22:13.319 [2024-12-16 01:43:43.824314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.824355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.835922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef4b08 00:22:13.319 [2024-12-16 01:43:43.837818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.837847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.849200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef4298 00:22:13.319 [2024-12-16 01:43:43.851218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.851258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.862765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef3a28 00:22:13.319 [2024-12-16 01:43:43.864629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.864660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.876218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef31b8 00:22:13.319 [2024-12-16 01:43:43.878381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.878411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.891063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef2948 00:22:13.319 [2024-12-16 01:43:43.893272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.893318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.907072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef20d8 00:22:13.319 [2024-12-16 01:43:43.909126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:10255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.909169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.922006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef1868 00:22:13.319 [2024-12-16 01:43:43.923962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.924005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.936352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef0ff8 00:22:13.319 [2024-12-16 01:43:43.938330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.938377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.950585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef0788 00:22:13.319 [2024-12-16 01:43:43.952568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.952639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:13.319 [2024-12-16 01:43:43.964821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeff18 00:22:13.319 [2024-12-16 01:43:43.966808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.319 [2024-12-16 01:43:43.966839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:43.980051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eef6a8 00:22:13.579 [2024-12-16 01:43:43.981888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:43.981917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:43.994625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeee38 00:22:13.579 [2024-12-16 01:43:43.996363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:43.996408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.008874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eee5c8 00:22:13.579 [2024-12-16 01:43:44.010784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.010812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.023349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eedd58 00:22:13.579 [2024-12-16 01:43:44.025205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 
01:43:44.025249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.037738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eed4e8 00:22:13.579 [2024-12-16 01:43:44.039468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.039512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.051963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eecc78 00:22:13.579 [2024-12-16 01:43:44.053667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.053695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.066344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eec408 00:22:13.579 [2024-12-16 01:43:44.068170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.068212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.080197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eebb98 00:22:13.579 [2024-12-16 01:43:44.081889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.081918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.093676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeb328 00:22:13.579 [2024-12-16 01:43:44.095327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.095371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.107418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeaab8 00:22:13.579 [2024-12-16 01:43:44.109079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.109122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.120939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eea248 00:22:13.579 [2024-12-16 01:43:44.122581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:13.579 [2024-12-16 01:43:44.122631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.134341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee99d8 00:22:13.579 [2024-12-16 01:43:44.136001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.136042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.147832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee9168 00:22:13.579 [2024-12-16 01:43:44.149341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.149384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.161184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee88f8 00:22:13.579 [2024-12-16 01:43:44.162794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.162821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.174646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee8088 00:22:13.579 [2024-12-16 01:43:44.176168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.176210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.188178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee7818 00:22:13.579 [2024-12-16 01:43:44.189679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.189707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.201547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee6fa8 00:22:13.579 [2024-12-16 01:43:44.203045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.203088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.215918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee6738 00:22:13.579 [2024-12-16 01:43:44.217670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18283 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.217709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:13.579 [2024-12-16 01:43:44.232692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee5ec8 00:22:13.579 [2024-12-16 01:43:44.234577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.579 [2024-12-16 01:43:44.234617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.248553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee5658 00:22:13.839 [2024-12-16 01:43:44.250085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.250150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.262660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee4de8 00:22:13.839 [2024-12-16 01:43:44.264049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.264092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.276184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee4578 00:22:13.839 [2024-12-16 01:43:44.277600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.277667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.289705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee3d08 00:22:13.839 [2024-12-16 01:43:44.291149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.291191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.303399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee3498 00:22:13.839 [2024-12-16 01:43:44.304849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:11192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.304877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.317343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee2c28 00:22:13.839 [2024-12-16 01:43:44.318867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.318895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.331193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee23b8 00:22:13.839 [2024-12-16 01:43:44.332588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.332657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.345049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee1b48 00:22:13.839 [2024-12-16 01:43:44.346398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.346459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.358706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee12d8 00:22:13.839 [2024-12-16 01:43:44.360007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.360048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.372281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee0a68 00:22:13.839 [2024-12-16 01:43:44.373584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.373653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.385726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee01f8 00:22:13.839 [2024-12-16 01:43:44.387086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.387114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.399447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016edf988 00:22:13.839 [2024-12-16 01:43:44.400760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.400788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.412950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016edf118 00:22:13.839 [2024-12-16 01:43:44.414250] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.414282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.426500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ede8a8 00:22:13.839 [2024-12-16 01:43:44.427728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.427771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:13.839 18091.00 IOPS, 70.67 MiB/s [2024-12-16T01:43:44.497Z] [2024-12-16 01:43:44.440159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ede038 00:22:13.839 [2024-12-16 01:43:44.441338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.839 [2024-12-16 01:43:44.441367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:13.839 [2024-12-16 01:43:44.459135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ede038 00:22:13.839 [2024-12-16 01:43:44.461398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.840 [2024-12-16 01:43:44.461442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:13.840 [2024-12-16 01:43:44.472852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ede8a8 00:22:13.840 [2024-12-16 01:43:44.475139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.840 [2024-12-16 01:43:44.475180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:13.840 [2024-12-16 01:43:44.487102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016edf118 00:22:13.840 [2024-12-16 01:43:44.489322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:13.840 [2024-12-16 01:43:44.489365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.501920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016edf988 00:22:14.099 [2024-12-16 01:43:44.504252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.504296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.515717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee01f8 00:22:14.099 
[2024-12-16 01:43:44.517833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.517876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.529273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee0a68 00:22:14.099 [2024-12-16 01:43:44.531480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.531521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.542839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee12d8 00:22:14.099 [2024-12-16 01:43:44.544992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.545033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.556362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee1b48 00:22:14.099 [2024-12-16 01:43:44.558666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.558709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.569869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee23b8 00:22:14.099 [2024-12-16 01:43:44.572053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.572094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.583532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee2c28 00:22:14.099 [2024-12-16 01:43:44.585567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.585616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.596883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee3498 00:22:14.099 [2024-12-16 01:43:44.599107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.599148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.610494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with 
pdu=0x200016ee3d08 00:22:14.099 [2024-12-16 01:43:44.612584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.612627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.624055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee4578 00:22:14.099 [2024-12-16 01:43:44.626080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.626127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.637482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee4de8 00:22:14.099 [2024-12-16 01:43:44.639513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.639579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.650843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee5658 00:22:14.099 [2024-12-16 01:43:44.652848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.099 [2024-12-16 01:43:44.652875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:14.099 [2024-12-16 01:43:44.664291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee5ec8 00:22:14.100 [2024-12-16 01:43:44.666366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.666409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:14.100 [2024-12-16 01:43:44.677691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee6738 00:22:14.100 [2024-12-16 01:43:44.679690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.679732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:14.100 [2024-12-16 01:43:44.691179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee6fa8 00:22:14.100 [2024-12-16 01:43:44.693128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.693169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:14.100 [2024-12-16 01:43:44.704593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127b1d0) with pdu=0x200016ee7818 00:22:14.100 [2024-12-16 01:43:44.706619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.706662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:14.100 [2024-12-16 01:43:44.718104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee8088 00:22:14.100 [2024-12-16 01:43:44.720094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.720136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:14.100 [2024-12-16 01:43:44.731605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee88f8 00:22:14.100 [2024-12-16 01:43:44.733491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.733535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:14.100 [2024-12-16 01:43:44.744930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee9168 00:22:14.100 [2024-12-16 01:43:44.746936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.100 [2024-12-16 01:43:44.746993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:14.359 [2024-12-16 01:43:44.759897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ee99d8 00:22:14.359 [2024-12-16 01:43:44.761817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.359 [2024-12-16 01:43:44.761844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:14.359 [2024-12-16 01:43:44.773436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eea248 00:22:14.359 [2024-12-16 01:43:44.775472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.359 [2024-12-16 01:43:44.775515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:14.359 [2024-12-16 01:43:44.787010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeaab8 00:22:14.360 [2024-12-16 01:43:44.788867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.788893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.800439] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeb328 00:22:14.360 [2024-12-16 01:43:44.802250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.802279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.813877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eebb98 00:22:14.360 [2024-12-16 01:43:44.815801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.815830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.827270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eec408 00:22:14.360 [2024-12-16 01:43:44.829106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.829148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.840821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eecc78 00:22:14.360 [2024-12-16 01:43:44.842713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.842741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.854204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eed4e8 00:22:14.360 [2024-12-16 01:43:44.856046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.856088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.867659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eedd58 00:22:14.360 [2024-12-16 01:43:44.869437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.869480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.882403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eee5c8 00:22:14.360 [2024-12-16 01:43:44.884354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.884396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.896539] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeee38 00:22:14.360 [2024-12-16 01:43:44.898315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.898345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.909965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eef6a8 00:22:14.360 [2024-12-16 01:43:44.911805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.911832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.923653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016eeff18 00:22:14.360 [2024-12-16 01:43:44.925321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.925364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.937088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef0788 00:22:14.360 [2024-12-16 01:43:44.938864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.938891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.950546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef0ff8 00:22:14.360 [2024-12-16 01:43:44.952198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.964149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef1868 00:22:14.360 [2024-12-16 01:43:44.965838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.965865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:44.977533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef20d8 00:22:14.360 [2024-12-16 01:43:44.979218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.979261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 
01:43:44.991115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef2948 00:22:14.360 [2024-12-16 01:43:44.992733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:44.992760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.360 [2024-12-16 01:43:45.004489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef31b8 00:22:14.360 [2024-12-16 01:43:45.006027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.360 [2024-12-16 01:43:45.006069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:14.621 [2024-12-16 01:43:45.019060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef3a28 00:22:14.621 [2024-12-16 01:43:45.020868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.621 [2024-12-16 01:43:45.020916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:14.621 [2024-12-16 01:43:45.032765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef4298 00:22:14.622 [2024-12-16 01:43:45.034336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.034382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.046240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef4b08 00:22:14.622 [2024-12-16 01:43:45.047878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.047905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.059694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef5378 00:22:14.622 [2024-12-16 01:43:45.061208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.061250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.074327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef5be8 00:22:14.622 [2024-12-16 01:43:45.076139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.076166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:22:14.622 [2024-12-16 01:43:45.090290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef6458 00:22:14.622 [2024-12-16 01:43:45.092001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.092044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.105384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef6cc8 00:22:14.622 [2024-12-16 01:43:45.107075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.107117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.119795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef7538 00:22:14.622 [2024-12-16 01:43:45.121423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.121484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.134227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef7da8 00:22:14.622 [2024-12-16 01:43:45.135892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.135922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.148506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef8618 00:22:14.622 [2024-12-16 01:43:45.150016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.150038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.162674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef8e88 00:22:14.622 [2024-12-16 01:43:45.164200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.164227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.176791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef96f8 00:22:14.622 [2024-12-16 01:43:45.178230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.178275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.191026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016ef9f68 00:22:14.622 [2024-12-16 01:43:45.192443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.192486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.205220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efa7d8 00:22:14.622 [2024-12-16 01:43:45.206771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.206801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.219758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb048 00:22:14.622 [2024-12-16 01:43:45.221104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.221147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.234844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb8b8 00:22:14.622 [2024-12-16 01:43:45.236306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.236350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.251405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc128 00:22:14.622 [2024-12-16 01:43:45.252978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.253004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.622 [2024-12-16 01:43:45.267108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc998 00:22:14.622 [2024-12-16 01:43:45.268486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.622 [2024-12-16 01:43:45.268529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.283253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efd208 00:22:14.882 [2024-12-16 01:43:45.284690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.284716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.297169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efda78 00:22:14.882 [2024-12-16 01:43:45.298596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.298665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.310852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efe2e8 00:22:14.882 [2024-12-16 01:43:45.312067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.312109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.324346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efeb58 00:22:14.882 [2024-12-16 01:43:45.325617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.325687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.343260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efef90 00:22:14.882 [2024-12-16 01:43:45.345544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.345595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.356803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efeb58 00:22:14.882 [2024-12-16 01:43:45.359105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.359145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.370292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efe2e8 00:22:14.882 [2024-12-16 01:43:45.372580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.372621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.383749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efda78 00:22:14.882 [2024-12-16 01:43:45.385928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.385970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.397168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efd208 00:22:14.882 [2024-12-16 01:43:45.399487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.399528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.410676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc998 00:22:14.882 [2024-12-16 01:43:45.412836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.412880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:14.882 [2024-12-16 01:43:45.424073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efc128 00:22:14.882 [2024-12-16 01:43:45.426310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.426338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:14.882 18154.00 IOPS, 70.91 MiB/s [2024-12-16T01:43:45.540Z] [2024-12-16 01:43:45.438476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b1d0) with pdu=0x200016efb8b8 00:22:14.882 [2024-12-16 01:43:45.440531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.882 [2024-12-16 01:43:45.440599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:14.882 00:22:14.882 Latency(us) 00:22:14.882 [2024-12-16T01:43:45.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.882 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:14.882 nvme0n1 : 2.01 18195.28 71.08 0.00 0.00 7028.42 5928.03 25618.62 00:22:14.882 [2024-12-16T01:43:45.540Z] =================================================================================================================== 00:22:14.882 [2024-12-16T01:43:45.540Z] Total : 18195.28 71.08 0.00 0.00 7028.42 5928.03 25618.62 00:22:14.882 { 00:22:14.882 "results": [ 00:22:14.882 { 00:22:14.882 "job": "nvme0n1", 00:22:14.882 "core_mask": "0x2", 00:22:14.882 "workload": "randwrite", 00:22:14.882 "status": "finished", 00:22:14.882 "queue_depth": 128, 00:22:14.882 "io_size": 4096, 00:22:14.882 "runtime": 2.009477, 00:22:14.882 "iops": 18195.28165786421, 00:22:14.882 "mibps": 71.07531897603207, 00:22:14.882 "io_failed": 0, 00:22:14.882 "io_timeout": 0, 00:22:14.882 "avg_latency_us": 7028.423672018161, 00:22:14.882 "min_latency_us": 5928.029090909091, 00:22:14.882 "max_latency_us": 25618.618181818183 00:22:14.882 } 00:22:14.882 ], 00:22:14.883 "core_count": 1 00:22:14.883 } 00:22:14.883 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:14.883 01:43:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:14.883 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:14.883 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:14.883 | .driver_specific 00:22:14.883 | .nvme_error 00:22:14.883 | .status_code 00:22:14.883 | .command_transient_transport_error' 00:22:15.140 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:22:15.140 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97683 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97683 ']' 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97683 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97683 00:22:15.141 killing process with pid 97683 00:22:15.141 Received shutdown signal, test time was about 2.000000 seconds 00:22:15.141 00:22:15.141 Latency(us) 00:22:15.141 [2024-12-16T01:43:45.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.141 [2024-12-16T01:43:45.799Z] =================================================================================================================== 00:22:15.141 [2024-12-16T01:43:45.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97683' 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97683 00:22:15.141 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97683 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=97730 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 97730 /var/tmp/bperf.sock 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 97730 ']' 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:15.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:15.401 01:43:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:15.401 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:15.401 Zero copy mechanism will not be used. 00:22:15.401 [2024-12-16 01:43:45.963164] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:15.401 [2024-12-16 01:43:45.963262] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97730 ] 00:22:15.694 [2024-12-16 01:43:46.110247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.694 [2024-12-16 01:43:46.129365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.694 [2024-12-16 01:43:46.157584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:15.694 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.694 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:22:15.694 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:15.694 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:15.960 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:15.960 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.960 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:15.960 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.960 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:15.960 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:16.218 
nvme0n1 00:22:16.218 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:22:16.218 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.218 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:16.218 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.218 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:16.218 01:43:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:16.479 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:16.479 Zero copy mechanism will not be used. 00:22:16.479 Running I/O for 2 seconds... 00:22:16.479 [2024-12-16 01:43:46.881013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.881099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.881127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.885847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.885962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.885983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.890841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.890918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.890940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.895249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.895366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.895387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.899904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.900003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.900024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.904414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.904532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.904581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.909078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.909169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.909190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.913431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.913586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.913607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.918252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.918352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.918375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.922817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.922916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.922936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.927447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.927556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.927577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.931934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.932032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.932053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.936418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.936551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.936584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.941004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.941097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.941118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.945478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.945612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.945633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.949897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.949993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.950013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.954490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.954605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.954639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.958984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.959057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.959077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.963566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.963665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.963685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.968255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.968350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.968370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.972908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.973005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.973026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.977336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.977462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.977483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.982020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.982102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.982163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.986625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.986737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.986758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.991331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.991450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.991470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:46.995770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.479 [2024-12-16 01:43:46.995864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.479 [2024-12-16 01:43:46.995884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.479 [2024-12-16 01:43:47.000347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.000444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.000465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.004813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.004895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.004916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.009256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.009329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.009349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.013770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.013868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.013891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.018490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.018602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.018637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.022955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.023053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.023073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.027475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.027608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 
01:43:47.027628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.031899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.031997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.032017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.036396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.036514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.036535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.040842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.040935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.045319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.045404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.045424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.049765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.049863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.049883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.054516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.054611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.054653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.059074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.059193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:16.480 [2024-12-16 01:43:47.059213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.063639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.063733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.063754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.068106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.068223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.068242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.072663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.072782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.072802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.077140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.077260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.077280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.081666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.081745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.081765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.086070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.086203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.086224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.090808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.090898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.090935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.095244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.095318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.095338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.099748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.099823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.099843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.104322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.104428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.104449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.108839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.108958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.108978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.113225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.113344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.113364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.117906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.118003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.118023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.122393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.480 [2024-12-16 01:43:47.122483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.480 [2024-12-16 01:43:47.122503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.480 [2024-12-16 01:43:47.126957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.481 [2024-12-16 01:43:47.127045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.481 [2024-12-16 01:43:47.127065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.481 [2024-12-16 01:43:47.131727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.481 [2024-12-16 01:43:47.131844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.481 [2024-12-16 01:43:47.131881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.136884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.136982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.137002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.141905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.142013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.142034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.146535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.146635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.146655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.151039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.151157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.151177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.155589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.155694] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.155714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.160230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.160328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.160348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.165004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.165118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.165138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.169592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.169691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.169710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.174275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.174355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.174376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.178917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.178991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.179011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.183424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.183497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.183516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.187898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.187993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.188013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.192435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.192531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.192580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.196821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.196962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.196982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.201376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.201473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.201493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.205854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.205973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.205992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.210538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.210658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.210679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.215049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.215123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.215142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.219603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 
01:43:47.219721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.219741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.224032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.224149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.224168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.228631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.228730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.228750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.233126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.233258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.233278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.237570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.237698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.237718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.241980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.242077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.242097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.246512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.741 [2024-12-16 01:43:47.246597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.741 [2024-12-16 01:43:47.246629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.741 [2024-12-16 01:43:47.251014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with 
pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.251088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.251108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.255553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.255643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.255664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.260049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.260145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.260164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.264556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.264654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.264675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.269077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.269173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.269193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.273608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.273666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.273686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.278399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.278488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.278523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.283439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.283555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.283593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.288920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.289035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.289055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.294363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.294501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.294550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.299381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.299500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.299520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.304397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.304517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.304553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.309390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.309507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.309527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.314416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.314596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.314619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.319610] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.319710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.319731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.324355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.324452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.324472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.329008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.329104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.329124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.333619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.333724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.333744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.338185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.338305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.338329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.342810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.342900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.342920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.347243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.347340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.347360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.351790] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.351888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.351909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.356546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.356635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.356655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.361001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.361104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.361123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.365645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.365729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.365749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.370261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.370364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.370387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.375006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.375103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.375123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:16.742 [2024-12-16 01:43:47.379478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.742 [2024-12-16 01:43:47.379605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.742 [2024-12-16 01:43:47.379625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:16.742 
[2024-12-16 01:43:47.384087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.743 [2024-12-16 01:43:47.384184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.743 [2024-12-16 01:43:47.384203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:16.743 [2024-12-16 01:43:47.388652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.743 [2024-12-16 01:43:47.388757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.743 [2024-12-16 01:43:47.388777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:16.743 [2024-12-16 01:43:47.393505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:16.743 [2024-12-16 01:43:47.393653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:16.743 [2024-12-16 01:43:47.393689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.002 [2024-12-16 01:43:47.398766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.398840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.403682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.403755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.403775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.408159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.408276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.408295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.412754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.412844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.412864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.417392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.417490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.417511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.422037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.422163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.422185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.426540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.426650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.426670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.431176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.431280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.431299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.435661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.435738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.435758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.440224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.440323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.440343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.444765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.444863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.444883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.449219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.449338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.449358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.453748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.453845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.453865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.458493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.458593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.458629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.463319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.463417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.463437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.468270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.468369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.468389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.473378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.473491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.473511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.478832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.478956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.478977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.484028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.484126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.484146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.489174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.489274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.489294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.494042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.494183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.494206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.499125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.499224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.499244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.504024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.504098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.504119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.508921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.509003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.509023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.513644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.513744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.513765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.518337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.518422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.518458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.523049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.523147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.003 [2024-12-16 01:43:47.523168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.003 [2024-12-16 01:43:47.527846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.003 [2024-12-16 01:43:47.527946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.527966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.532444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.532554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.532574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.537150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.537249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.537268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.541793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.541891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.541911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.546819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.546893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 
01:43:47.546913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.551421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.551518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.551566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.556080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.556178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.556198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.560672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.560764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.560785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.565436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.565511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.565531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.570087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.570205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.570226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.574914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.575013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.575033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.579579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.579703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:17.004 [2024-12-16 01:43:47.579722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.584344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.584424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.584444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.589157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.589294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.589315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.593862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.593952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.593972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.598542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.598642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.598662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.603274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.603387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.603408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.608161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.608260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.608281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.612799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.612919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.612939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.617710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.617809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.617829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.622719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.622816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.622837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.627419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.627519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.627551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.632016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.632114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.632134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.636890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.637004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.637024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.641642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.641719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.641739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.646226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.646329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.651023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.651121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.651141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.004 [2024-12-16 01:43:47.656003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.004 [2024-12-16 01:43:47.656097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.004 [2024-12-16 01:43:47.656118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.264 [2024-12-16 01:43:47.661043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.264 [2024-12-16 01:43:47.661166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.264 [2024-12-16 01:43:47.661186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.264 [2024-12-16 01:43:47.666069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.264 [2024-12-16 01:43:47.666202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.264 [2024-12-16 01:43:47.666225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.264 [2024-12-16 01:43:47.671166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.264 [2024-12-16 01:43:47.671265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.264 [2024-12-16 01:43:47.671286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.264 [2024-12-16 01:43:47.676023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.264 [2024-12-16 01:43:47.676142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.264 [2024-12-16 01:43:47.676162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.264 [2024-12-16 01:43:47.680718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.264 [2024-12-16 01:43:47.680819] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.264 [2024-12-16 01:43:47.680840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.264 [2024-12-16 01:43:47.685423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.685578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.685599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.690368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.690482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.690517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.695060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.695160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.695180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.699784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.699913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.699932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.704274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.704392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.704411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.708821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.708919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.708955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.713431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.713531] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.713568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.718020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.718141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.718177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.722573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.722658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.722677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.727065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.727167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.727187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.731640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.731716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.731735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.736091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.736188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.736208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.740664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.740764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.740784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.745256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 
01:43:47.745329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.745349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.749789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.749887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.749907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.754525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.754609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.754641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.758978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.759097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.759117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.763556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.763675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.763695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.767983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.768081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.768101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.772593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.772699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.772718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.777055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 
00:22:17.265 [2024-12-16 01:43:47.777156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.777175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.781664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.781741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.781761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.786064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.786211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.786231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.790683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.790803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.790823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.795125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.795244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.795264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.799653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.799726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.799745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.804178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.804250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.804270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.808674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.265 [2024-12-16 01:43:47.808773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.265 [2024-12-16 01:43:47.808793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.265 [2024-12-16 01:43:47.813098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.813171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.813191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.817758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.817832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.817851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.822190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.822311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.822330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.826843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.826941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.826961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.831271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.831390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.831410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.835881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.835977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.835997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.840395] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.840514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.845003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.845084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.845103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.849442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.849517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.849537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.853883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.853981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.854000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.858583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.858686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.858706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.863073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.863192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.863212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.867659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.867778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.867798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.872305] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.872402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.872422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.876862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.876975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.876996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.266 6654.00 IOPS, 831.75 MiB/s [2024-12-16T01:43:47.924Z] [2024-12-16 01:43:47.882349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.882433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.882468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.887014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.887117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.887137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.891583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.891686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.891706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.896126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.896201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.896221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.900635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.900756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.900776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.905192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.905266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.905285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.909642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.909738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.909758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.914027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.914187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.914209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.266 [2024-12-16 01:43:47.919203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.266 [2024-12-16 01:43:47.919277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.266 [2024-12-16 01:43:47.919296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.924016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.924134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.924153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.928851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.928964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.928984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.933402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.933521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.933541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.937955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.938053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.938076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.942743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.942840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.942861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.947230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.947305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.947325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.951818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.951915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.951935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.956359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.956448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.956468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.960942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.961056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.961075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.965383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.965482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.965502] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.970144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.970271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.970293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.974708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.974792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.974812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.979300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.979397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.979416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.983719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.983818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.983838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.988214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.988334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.527 [2024-12-16 01:43:47.988354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.527 [2024-12-16 01:43:47.992702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.527 [2024-12-16 01:43:47.992799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:47.992819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:47.997215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:47.997334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:47.997354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.001691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.001804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.001824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.006185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.006273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.006294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.010755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.010854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.010874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.015392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.015505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.015525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.019956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.020052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.020072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.024342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.024460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.024479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.028934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.029032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 
01:43:48.029052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.033350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.033423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.033443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.037833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.037929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.037949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.042531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.042632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.042653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.047080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.047206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.047226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.051495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.051627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.051647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.056175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.056307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.056327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.060692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.060775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:17.528 [2024-12-16 01:43:48.060795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.065176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.065293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.065313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.069661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.069735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.069755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.074082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.074245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.074266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.078622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.078725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.078745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.083123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.083239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.083259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.087521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.087646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.087666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.092166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.092283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.092303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.096739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.096836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.096856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.101251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.101347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.101367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.105778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.105867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.105887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.110206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.110315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.110335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.114706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.114779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.114799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.119276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.119350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.119369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.123762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.123860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.528 [2024-12-16 01:43:48.123880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.528 [2024-12-16 01:43:48.128436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.528 [2024-12-16 01:43:48.128533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.128569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.132971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.133074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.133093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.137514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.137623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.137643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.141960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.142063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.142082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.146679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.146799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.146819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.151134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.151252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.151271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.155729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.155803] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.155823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.160286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.160359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.160379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.164706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.164804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.164824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.169167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.169269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.169288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.173824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.173898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.173919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.529 [2024-12-16 01:43:48.178556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.529 [2024-12-16 01:43:48.178671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.529 [2024-12-16 01:43:48.178692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.183749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.183848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.183869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.188518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.188634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.188655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.193232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.193358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.193378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.197878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.197977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.197997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.202756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.202848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.202868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.207327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.207424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.207445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.211919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.212032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.212051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.216561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.216676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.216696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.221142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 
01:43:48.221258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.221278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.225651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.225754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.225774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.230262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.230360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.230381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.235009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.235121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.235141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.239661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.239766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.239787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.244353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.244443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.244463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.248970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.249088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.249108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.253378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with 
pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.253504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.253524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.258011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.258192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.258213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.262693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.262768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.262788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.267177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.267251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.267271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.271729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.271804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.271824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.276222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.276320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.276340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.280664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.280751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.280770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.285086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.285209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.285229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.289501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.289634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.289654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.294341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.294464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.294485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.299301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.299424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.299444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.304314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.304428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.304448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.309590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.309685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.309707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.315236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.315347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.315367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.320435] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.320550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.320590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.325493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.325650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.325672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.330518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.788 [2024-12-16 01:43:48.330661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.788 [2024-12-16 01:43:48.330682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.788 [2024-12-16 01:43:48.335449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.335625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.335646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.340280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.340377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.340396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.345081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.345205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.345224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.349689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.349773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.349793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.789 
[2024-12-16 01:43:48.354386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.354530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.354550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.359095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.359212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.359231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.363709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.363808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.363829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.368215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.368290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.368310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.372771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.372867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.372888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.377210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.377305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.377326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.381752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.381848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.381868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.386172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.386296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.386317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.390783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.390880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.390900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.395309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.395388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.395408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.399785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.399886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.399907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.404248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.404366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.404386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.408815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.408913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.408933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.413243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.413333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.413352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.417830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.417927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.417948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.422269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.422353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.422374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.426839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.426913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.426932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.431274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.431372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.431392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.435856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.435973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.435993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:17.789 [2024-12-16 01:43:48.440452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:17.789 [2024-12-16 01:43:48.440554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.789 [2024-12-16 01:43:48.440574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.445627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.445736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.445756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.450631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.450754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.450774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.455204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.455321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.455341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.459709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.459823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.459844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.464329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.464454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.464473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.468819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.468914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.468934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.473297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.473393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.473413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.477884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.477977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.477998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.482710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.482801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.482822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.487257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.487357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.487376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.491968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.492066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.492085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.496609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.496709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.496729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.501312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.501410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.501430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.505963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.506052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.506073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.510854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.510927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 
01:43:48.510947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.515449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.515572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.515594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.520320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.520420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.520440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.524979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.525076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.525096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.529643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.529734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.529754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.534263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.534351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.534373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.539005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.539096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.543571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.543692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:18.049 [2024-12-16 01:43:48.543712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.548216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.548337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.548357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.552712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.552786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.552805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.557148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.557267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.557287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.561653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.561744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.561765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.566380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.566496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.566532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.570900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.570974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.570993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.575365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.575460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.575480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.579833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.579931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.579966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.584470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.584621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.584641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.589705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.589804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.589824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.594281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.594370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.594392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.598905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.599001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.599021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.603453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.603577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.603598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.607942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.608039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.608058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.612498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.612610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.612629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.617141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.617239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.617259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.621768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.621865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.621885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.626269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.049 [2024-12-16 01:43:48.626351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.049 [2024-12-16 01:43:48.626373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.049 [2024-12-16 01:43:48.630895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.630976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.630996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.635329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.635448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.635467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.640003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.640101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.640121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.644576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.644648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.644668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.649070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.649197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.649216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.653516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.653651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.653671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.658262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.658350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.658372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.662801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.662899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.662919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.667482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.667592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.667611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.671906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.672004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.672024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.676812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.676925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.676960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.681607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.681704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.681725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.686474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.686617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.686638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.691726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.691827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.691847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.696746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.696844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.696865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.050 [2024-12-16 01:43:48.702218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.050 [2024-12-16 01:43:48.702311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.050 [2024-12-16 01:43:48.702334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.707710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 
01:43:48.707799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.707820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.712948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.713048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.713069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.717996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.718087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.718107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.722911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.723010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.723030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.727739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.727838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.727858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.732344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.732442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.732462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.737040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.737141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.309 [2024-12-16 01:43:48.737161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.309 [2024-12-16 01:43:48.741642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with 
pdu=0x200016eff3c8 00:22:18.309 [2024-12-16 01:43:48.741721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.741741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.746640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.746718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.746737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.751200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.751275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.751295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.755867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.755965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.755985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.760531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.760641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.760661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.765407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.765491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.765511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.770216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.770298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.770322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.774854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.774952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.774972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.779483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.779571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.779591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.784233] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.784310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.784330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.788903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.788986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.789006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.793488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.793591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.793612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.798077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.798223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.798245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.802841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.802953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.802972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.807703] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.807802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.807822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.812325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.812399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.812419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.817021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.817143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.817162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.821735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.821836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.821856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.826628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.826703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.826723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.831236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.831335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.831354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.835948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.836035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.836055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.840705] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.840797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.840817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.845322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.845422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.845442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.849949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.850047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.850067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.854709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.854849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.854870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.859520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.859656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.859676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.310 [2024-12-16 01:43:48.864197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.310 [2024-12-16 01:43:48.864295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.310 [2024-12-16 01:43:48.864315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.311 [2024-12-16 01:43:48.868827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.311 [2024-12-16 01:43:48.868920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.311 [2024-12-16 01:43:48.868940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.311 
[2024-12-16 01:43:48.873927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.311 [2024-12-16 01:43:48.874027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.311 [2024-12-16 01:43:48.874049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.311 [2024-12-16 01:43:48.878748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x127b510) with pdu=0x200016eff3c8 00:22:18.311 [2024-12-16 01:43:48.878852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.311 [2024-12-16 01:43:48.878872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.311 6652.50 IOPS, 831.56 MiB/s 00:22:18.311 Latency(us) 00:22:18.311 [2024-12-16T01:43:48.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.311 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:18.311 nvme0n1 : 2.00 6650.47 831.31 0.00 0.00 2400.47 1482.01 5868.45 00:22:18.311 [2024-12-16T01:43:48.969Z] =================================================================================================================== 00:22:18.311 [2024-12-16T01:43:48.969Z] Total : 6650.47 831.31 0.00 0.00 2400.47 1482.01 5868.45 00:22:18.311 { 00:22:18.311 "results": [ 00:22:18.311 { 00:22:18.311 "job": "nvme0n1", 00:22:18.311 "core_mask": "0x2", 00:22:18.311 "workload": "randwrite", 00:22:18.311 "status": "finished", 00:22:18.311 "queue_depth": 16, 00:22:18.311 "io_size": 131072, 00:22:18.311 "runtime": 2.004069, 00:22:18.311 "iops": 6650.469619559007, 00:22:18.311 "mibps": 831.3087024448758, 00:22:18.311 "io_failed": 0, 00:22:18.311 "io_timeout": 0, 00:22:18.311 "avg_latency_us": 2400.468667466987, 00:22:18.311 "min_latency_us": 1482.0072727272727, 00:22:18.311 "max_latency_us": 5868.450909090909 00:22:18.311 } 00:22:18.311 ], 00:22:18.311 "core_count": 1 00:22:18.311 } 00:22:18.311 01:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:18.311 01:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:18.311 01:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:22:18.311 01:43:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:18.311 | .driver_specific 00:22:18.311 | .nvme_error 00:22:18.311 | .status_code 00:22:18.311 | .command_transient_transport_error' 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 430 > 0 )) 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 97730 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97730 ']' 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97730 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # 
uname 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97730 00:22:18.570 killing process with pid 97730 00:22:18.570 Received shutdown signal, test time was about 2.000000 seconds 00:22:18.570 00:22:18.570 Latency(us) 00:22:18.570 [2024-12-16T01:43:49.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.570 [2024-12-16T01:43:49.228Z] =================================================================================================================== 00:22:18.570 [2024-12-16T01:43:49.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97730' 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97730 00:22:18.570 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97730 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 97563 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 97563 ']' 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 97563 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97563 00:22:18.828 killing process with pid 97563 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.828 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.829 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97563' 00:22:18.829 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 97563 00:22:18.829 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 97563 00:22:18.829 ************************************ 00:22:18.829 END TEST nvmf_digest_error 00:22:18.829 ************************************ 00:22:18.829 00:22:18.829 real 0m14.310s 00:22:18.829 user 0m27.676s 00:22:18.829 sys 0m4.337s 00:22:18.829 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.829 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # 
nvmftestfini 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.088 rmmod nvme_tcp 00:22:19.088 rmmod nvme_fabrics 00:22:19.088 rmmod nvme_keyring 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 97563 ']' 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 97563 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 97563 ']' 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 97563 00:22:19.088 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (97563) - No such process 00:22:19.088 Process with pid 97563 is not found 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 97563 is not found' 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:19.088 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:22:19.347 00:22:19.347 real 0m30.557s 00:22:19.347 user 0m57.437s 00:22:19.347 sys 0m9.136s 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.347 ************************************ 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:22:19.347 END TEST nvmf_digest 00:22:19.347 ************************************ 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:19.347 ************************************ 00:22:19.347 START TEST nvmf_host_multipath 00:22:19.347 ************************************ 00:22:19.347 01:43:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:22:19.347 * Looking for test storage... 
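The nvmftestfini sequence just above tears the digest setup back down before the multipath test starts: the nvme-tcp, nvme-fabrics and nvme-keyring modules are unloaded, only the iptables rules tagged with the SPDK_NVMF comment are stripped, and the veth/bridge/namespace topology is removed. A condensed sketch of that teardown, with the interface and namespace names taken from this log (the final namespace removal is an assumption standing in for the _remove_spdk_ns helper):

  # Hedged sketch of the cleanup shown above (device and namespace names from this log).
  modprobe -r nvme-tcp nvme-fabrics nvme-keyring || true

  # drop only the firewall rules carrying the SPDK_NVMF comment
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # detach and delete the veth/bridge topology
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true
      ip link set "$dev" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if 2>/dev/null || true
  ip link delete nvmf_init_if2 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true

  # assumption: removing the namespace is the effect of _remove_spdk_ns here
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true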
00:22:19.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:19.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.607 --rc genhtml_branch_coverage=1 00:22:19.607 --rc genhtml_function_coverage=1 00:22:19.607 --rc genhtml_legend=1 00:22:19.607 --rc geninfo_all_blocks=1 00:22:19.607 --rc geninfo_unexecuted_blocks=1 00:22:19.607 00:22:19.607 ' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:19.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.607 --rc genhtml_branch_coverage=1 00:22:19.607 --rc genhtml_function_coverage=1 00:22:19.607 --rc genhtml_legend=1 00:22:19.607 --rc geninfo_all_blocks=1 00:22:19.607 --rc geninfo_unexecuted_blocks=1 00:22:19.607 00:22:19.607 ' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:19.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.607 --rc genhtml_branch_coverage=1 00:22:19.607 --rc genhtml_function_coverage=1 00:22:19.607 --rc genhtml_legend=1 00:22:19.607 --rc geninfo_all_blocks=1 00:22:19.607 --rc geninfo_unexecuted_blocks=1 00:22:19.607 00:22:19.607 ' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:19.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:19.607 --rc genhtml_branch_coverage=1 00:22:19.607 --rc genhtml_function_coverage=1 00:22:19.607 --rc genhtml_legend=1 00:22:19.607 --rc geninfo_all_blocks=1 00:22:19.607 --rc geninfo_unexecuted_blocks=1 00:22:19.607 00:22:19.607 ' 00:22:19.607 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:19.608 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:19.608 Cannot find device "nvmf_init_br" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:19.608 Cannot find device "nvmf_init_br2" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:19.608 Cannot find device "nvmf_tgt_br" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:19.608 Cannot find device "nvmf_tgt_br2" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:19.608 Cannot find device "nvmf_init_br" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:19.608 Cannot find device "nvmf_init_br2" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:19.608 Cannot find device "nvmf_tgt_br" 00:22:19.608 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:19.609 Cannot find device "nvmf_tgt_br2" 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:19.609 Cannot find device "nvmf_br" 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:19.609 Cannot find device "nvmf_init_if" 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:22:19.609 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:19.867 Cannot find device "nvmf_init_if2" 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:22:19.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:19.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
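The nvmf_veth_init commands around this point build the virtual topology the multipath test runs on: the initiator endpoints nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, the target endpoints nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into nvmf_tgt_ns_spdk, and all four peer ends are enslaved to the nvmf_br bridge. A condensed sketch of a single initiator/target leg, with names and addresses taken from this log:

  # Hedged sketch of one leg of the topology (the second leg is identical with *_if2 / 10.0.0.2 / 10.0.0.4).
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the peer ends together
  ip link set nvmf_tgt_br master nvmf_br

  # after this, 10.0.0.1 in the root namespace reaches 10.0.0.3 in the target
  # namespace, which the pings below verify
  ping -c 1 10.0.0.3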
00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:19.867 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:20.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:20.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:22:20.126 00:22:20.126 --- 10.0.0.3 ping statistics --- 00:22:20.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.126 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:20.126 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:20.126 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:22:20.126 00:22:20.126 --- 10.0.0.4 ping statistics --- 00:22:20.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.126 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:20.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:20.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:20.126 00:22:20.126 --- 10.0.0.1 ping statistics --- 00:22:20.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.126 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:20.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:20.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:22:20.126 00:22:20.126 --- 10.0.0.2 ping statistics --- 00:22:20.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.126 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.126 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=98037 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 98037 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 98037 ']' 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.127 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:20.127 [2024-12-16 01:43:50.672397] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
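With all four addresses answering pings, nvmfappstart launches the target inside the namespace and waits for its RPC socket before any subsystem is configured. A hedged sketch of that step: the binary path, flags and namespace are exactly the ones shown in this log, while the poll loop is an assumption standing in for the harness's waitforlisten helper.

  # Hedged sketch: start nvmf_tgt in the target namespace and wait for /var/tmp/spdk.sock.
  NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!

  # the RPC socket is a Unix socket on the shared filesystem, so it can be polled
  # from the root namespace (assumption: a plain retry loop in place of waitforlisten)
  for _ in $(seq 1 100); do
      "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  echo "nvmf_tgt running as pid $nvmfpid"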
00:22:20.127 [2024-12-16 01:43:50.672489] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.385 [2024-12-16 01:43:50.826817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:20.385 [2024-12-16 01:43:50.852157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.386 [2024-12-16 01:43:50.852216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.386 [2024-12-16 01:43:50.852229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.386 [2024-12-16 01:43:50.852240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.386 [2024-12-16 01:43:50.852248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.386 [2024-12-16 01:43:50.853354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.386 [2024-12-16 01:43:50.853374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.386 [2024-12-16 01:43:50.891041] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=98037 00:22:20.386 01:43:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:20.645 [2024-12-16 01:43:51.277319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.645 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:21.212 Malloc0 00:22:21.212 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:21.212 01:43:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:21.471 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:21.730 [2024-12-16 01:43:52.275772] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:21.730 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:21.989 [2024-12-16 01:43:52.499950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=98084 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 98084 /var/tmp/bdevperf.sock 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 98084 ']' 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.989 01:43:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:22.925 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:22.925 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:22:22.925 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:23.183 01:43:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:23.441 Nvme0n1 00:22:23.698 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:23.956 Nvme0n1 00:22:23.956 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:23.956 01:43:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:22:24.890 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:22:24.890 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:25.148 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:25.407 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:22:25.407 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98125 00:22:25.407 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:25.407 01:43:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:31.966 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:31.966 01:44:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.966 Attaching 4 probes... 00:22:31.966 @path[10.0.0.3, 4421]: 15093 00:22:31.966 @path[10.0.0.3, 4421]: 20363 00:22:31.966 @path[10.0.0.3, 4421]: 20800 00:22:31.966 @path[10.0.0.3, 4421]: 20583 00:22:31.966 @path[10.0.0.3, 4421]: 20597 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98125 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:31.966 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:32.225 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:32.225 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98244 00:22:32.225 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:32.225 01:44:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.786 Attaching 4 probes... 00:22:38.786 @path[10.0.0.3, 4420]: 20427 00:22:38.786 @path[10.0.0.3, 4420]: 20451 00:22:38.786 @path[10.0.0.3, 4420]: 20645 00:22:38.786 @path[10.0.0.3, 4420]: 20660 00:22:38.786 @path[10.0.0.3, 4420]: 20633 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:38.786 01:44:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98244 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:38.786 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:39.044 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:39.044 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98356 00:22:39.044 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:39.044 01:44:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:45.613 Attaching 4 probes... 00:22:45.613 @path[10.0.0.3, 4421]: 15704 00:22:45.613 @path[10.0.0.3, 4421]: 20470 00:22:45.613 @path[10.0.0.3, 4421]: 20549 00:22:45.613 @path[10.0.0.3, 4421]: 20784 00:22:45.613 @path[10.0.0.3, 4421]: 20623 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98356 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:45.613 01:44:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:45.613 01:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:45.873 01:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:45.873 01:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98469 00:22:45.873 01:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:45.873 01:44:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.493 Attaching 4 probes... 
00:22:52.493 00:22:52.493 00:22:52.493 00:22:52.493 00:22:52.493 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98469 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:52.493 01:44:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:52.751 01:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:52.751 01:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:52.751 01:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98590 00:22:52.751 01:44:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.319 Attaching 4 probes... 
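Every confirm_io_on_port cycle in this run follows the same pattern: set the ANA state of each listener, give bdevperf six seconds of I/O while the bpftrace nvmf_path.bt script counts completions per path, then check that the port reported in the requested ANA state is the one carrying traffic (4421 when it is optimized, 4420 when only the first path is usable, and an empty port with empty probe output when both paths are inaccessible, as in the cycle just above). A condensed sketch of one such cycle using the RPCs shown in this log, with the bpftrace accounting replaced by the listener query the test also performs:

  # Hedged sketch of one ANA check cycle (NQN, address, ports and jq filter from this log).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # prefer the 4421 listener, keep 4420 reachable but not preferred
  "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  "$RPC" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.3 -s 4421 -n optimized

  sleep 6   # let the multipath bdev settle on the preferred path

  active_port=$("$RPC" nvmf_subsystem_get_listeners "$NQN" \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  [[ $active_port == 4421 ]] && echo "I/O is expected on port $active_port"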
00:22:59.319 @path[10.0.0.3, 4421]: 19607 00:22:59.319 @path[10.0.0.3, 4421]: 20206 00:22:59.319 @path[10.0.0.3, 4421]: 20152 00:22:59.319 @path[10.0.0.3, 4421]: 20104 00:22:59.319 @path[10.0.0.3, 4421]: 20400 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98590 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:59.319 01:44:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:23:00.256 01:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:00.256 01:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98709 00:23:00.256 01:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:00.256 01:44:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:06.825 01:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:06.825 01:44:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:06.825 Attaching 4 probes... 
00:23:06.825 @path[10.0.0.3, 4420]: 19600 00:23:06.825 @path[10.0.0.3, 4420]: 19811 00:23:06.825 @path[10.0.0.3, 4420]: 19726 00:23:06.825 @path[10.0.0.3, 4420]: 19702 00:23:06.825 @path[10.0.0.3, 4420]: 19686 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98709 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:06.825 [2024-12-16 01:44:37.329269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:06.825 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:07.084 01:44:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:23:13.651 01:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:23:13.651 01:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=98889 00:23:13.651 01:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98037 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:13.651 01:44:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.230 Attaching 4 probes... 
00:23:20.230 @path[10.0.0.3, 4421]: 19586 00:23:20.230 @path[10.0.0.3, 4421]: 20077 00:23:20.230 @path[10.0.0.3, 4421]: 19998 00:23:20.230 @path[10.0.0.3, 4421]: 20120 00:23:20.230 @path[10.0.0.3, 4421]: 20072 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 98889 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 98084 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 98084 ']' 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 98084 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98084 00:23:20.230 killing process with pid 98084 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98084' 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 98084 00:23:20.230 01:44:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 98084 00:23:20.230 { 00:23:20.230 "results": [ 00:23:20.230 { 00:23:20.230 "job": "Nvme0n1", 00:23:20.230 "core_mask": "0x4", 00:23:20.230 "workload": "verify", 00:23:20.230 "status": "terminated", 00:23:20.230 "verify_range": { 00:23:20.230 "start": 0, 00:23:20.230 "length": 16384 00:23:20.230 }, 00:23:20.230 "queue_depth": 128, 00:23:20.230 "io_size": 4096, 00:23:20.230 "runtime": 55.515743, 00:23:20.230 "iops": 8482.548815027118, 00:23:20.230 "mibps": 33.13495630869968, 00:23:20.230 "io_failed": 0, 00:23:20.230 "io_timeout": 0, 00:23:20.230 "avg_latency_us": 15059.469762051249, 00:23:20.230 "min_latency_us": 165.70181818181817, 00:23:20.230 "max_latency_us": 7046430.72 00:23:20.230 } 00:23:20.230 ], 00:23:20.230 "core_count": 1 00:23:20.230 } 00:23:20.230 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 98084 00:23:20.230 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:20.230 [2024-12-16 01:43:52.579084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 
23.11.0 initialization... 00:23:20.230 [2024-12-16 01:43:52.579181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98084 ] 00:23:20.230 [2024-12-16 01:43:52.763188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.230 [2024-12-16 01:43:52.787430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.230 [2024-12-16 01:43:52.826941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:20.230 Running I/O for 90 seconds... 00:23:20.230 7829.00 IOPS, 30.58 MiB/s [2024-12-16T01:44:50.888Z] 7710.00 IOPS, 30.12 MiB/s [2024-12-16T01:44:50.888Z] 7742.33 IOPS, 30.24 MiB/s [2024-12-16T01:44:50.888Z] 8358.25 IOPS, 32.65 MiB/s [2024-12-16T01:44:50.889Z] 8770.20 IOPS, 34.26 MiB/s [2024-12-16T01:44:50.889Z] 9024.17 IOPS, 35.25 MiB/s [2024-12-16T01:44:50.889Z] 9210.43 IOPS, 35.98 MiB/s [2024-12-16T01:44:50.889Z] 9323.12 IOPS, 36.42 MiB/s [2024-12-16T01:44:50.889Z] [2024-12-16 01:44:02.704045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:90064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:90072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:89736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:89744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.231 [2024-12-16 01:44:02.704674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:20.231 [2024-12-16 01:44:02.704961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.704982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.704997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:90144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:90176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:20.231 [2024-12-16 01:44:02.705706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.231 [2024-12-16 01:44:02.705719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.705751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.705930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:89840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:23:20.232 [2024-12-16 01:44:02.705968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.705983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:89856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.232 [2024-12-16 01:44:02.706901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.706961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.706980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.707001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.707018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.707039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.707052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.707071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:20.232 [2024-12-16 01:44:02.707084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.707103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.707116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.707134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.707147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:20.232 [2024-12-16 01:44:02.707166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.232 [2024-12-16 01:44:02.707179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.707633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:89928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:89936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:89952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.707978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.707991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.708009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.708022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.708040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.708056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:23:20.233 [2024-12-16 01:44:02.708075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.708090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.708109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.708122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.233 [2024-12-16 01:44:02.709446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.233 [2024-12-16 01:44:02.709787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:20.233 [2024-12-16 01:44:02.709807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:02.709820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:02.709839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:02.709851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:02.709870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:02.709891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:02.709914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:02.709931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:02.709951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:02.709968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:20.234 9373.56 IOPS, 36.62 MiB/s [2024-12-16T01:44:50.892Z] 9469.80 IOPS, 36.99 MiB/s [2024-12-16T01:44:50.892Z] 9549.27 IOPS, 37.30 MiB/s [2024-12-16T01:44:50.892Z] 9614.67 IOPS, 37.56 MiB/s [2024-12-16T01:44:50.892Z] 9669.69 IOPS, 37.77 MiB/s [2024-12-16T01:44:50.892Z] 9716.14 IOPS, 37.95 MiB/s [2024-12-16T01:44:50.892Z] [2024-12-16 01:44:09.245903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:101136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.245960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:101152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:101176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.234 [2024-12-16 01:44:09.246247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:100768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:100776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:100784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:100792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:100824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:100880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:20.234 [2024-12-16 01:44:09.246945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.234 [2024-12-16 01:44:09.246957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.246975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:100920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.246988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:100928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:101216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:23:20.235 [2024-12-16 01:44:09.247462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:101000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.235 [2024-12-16 01:44:09.247930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.247978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.247991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.248030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.248061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.248091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.248121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.248152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.235 [2024-12-16 01:44:09.248181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:20.235 [2024-12-16 01:44:09.248199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101440 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.248702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.248981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.248993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249042] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.236 [2024-12-16 01:44:09.249361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 
sqhd:0057 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.249392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.249423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.249454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:20.236 [2024-12-16 01:44:09.249472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.236 [2024-12-16 01:44:09.249485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.249503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:09.249515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.249534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:09.249560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.249582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:09.249596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:09.250256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250789] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.250967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.250980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:09.251004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:09.251017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:20.237 9577.73 IOPS, 37.41 MiB/s [2024-12-16T01:44:50.895Z] 9128.62 IOPS, 35.66 MiB/s [2024-12-16T01:44:50.895Z] 9198.12 IOPS, 35.93 MiB/s [2024-12-16T01:44:50.895Z] 9256.83 IOPS, 36.16 MiB/s [2024-12-16T01:44:50.895Z] 9311.68 IOPS, 36.37 MiB/s [2024-12-16T01:44:50.895Z] 9360.80 IOPS, 36.57 MiB/s [2024-12-16T01:44:50.895Z] 9405.43 IOPS, 36.74 MiB/s [2024-12-16T01:44:50.895Z] [2024-12-16 01:44:16.362862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.362934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77728 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.237 [2024-12-16 01:44:16.363231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:16.363261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:16.363290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:16.363322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:16.363352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:16.363381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.237 [2024-12-16 01:44:16.363411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:20.237 [2024-12-16 01:44:16.363429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363722] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.363766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.363935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.363969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.363988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 
m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.238 [2024-12-16 01:44:16.364418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.238 [2024-12-16 01:44:16.364821] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:20.238 [2024-12-16 01:44:16.364840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.364853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.364871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.364891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.364911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.364924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.364943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.364956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.364979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.364994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.239 [2024-12-16 01:44:16.365772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:20.239 [2024-12-16 01:44:16.365966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.239 [2024-12-16 01:44:16.365978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.365997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:23:20.240 [2024-12-16 01:44:16.366208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.366353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.366632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.366664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.240 [2024-12-16 01:44:16.367314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.367968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.367982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.368006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:20.240 [2024-12-16 01:44:16.368019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.368043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.368056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.368080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.368093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:20.240 [2024-12-16 01:44:16.368117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.240 [2024-12-16 01:44:16.368130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:20.240 9376.09 IOPS, 36.63 MiB/s [2024-12-16T01:44:50.898Z] 8968.43 IOPS, 35.03 MiB/s [2024-12-16T01:44:50.898Z] 8594.75 IOPS, 33.57 MiB/s [2024-12-16T01:44:50.898Z] 8250.96 IOPS, 32.23 MiB/s [2024-12-16T01:44:50.898Z] 7933.62 IOPS, 30.99 MiB/s [2024-12-16T01:44:50.898Z] 7639.78 IOPS, 29.84 MiB/s [2024-12-16T01:44:50.898Z] 7366.93 IOPS, 28.78 MiB/s [2024-12-16T01:44:50.898Z] 7151.79 IOPS, 27.94 MiB/s [2024-12-16T01:44:50.899Z] 7241.13 IOPS, 28.29 MiB/s [2024-12-16T01:44:50.899Z] 7332.97 IOPS, 28.64 MiB/s [2024-12-16T01:44:50.899Z] 7418.94 IOPS, 28.98 MiB/s [2024-12-16T01:44:50.899Z] 7501.39 IOPS, 29.30 MiB/s [2024-12-16T01:44:50.899Z] 7578.65 IOPS, 29.60 MiB/s [2024-12-16T01:44:50.899Z] 7646.40 IOPS, 29.87 MiB/s [2024-12-16T01:44:50.899Z] [2024-12-16 01:44:29.782771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.782824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.782894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.782934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.782957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.782970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.782988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:66848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:66880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 
01:44:29.783900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.241 [2024-12-16 01:44:29.783925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.241 [2024-12-16 01:44:29.783950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.241 [2024-12-16 01:44:29.783964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.783975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.783989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.784532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.784983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.784997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.785009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.785035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.785075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.785100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.242 [2024-12-16 01:44:29.785142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.785183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.785208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.785234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 
01:44:29.785247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.785259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.785284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.242 [2024-12-16 01:44:29.785317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.242 [2024-12-16 01:44:29.785330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:67008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.785356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:67016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.785380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:26 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:20.243 [2024-12-16 01:44:29.785975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.785989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:67024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:67040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:67048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:67056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:67064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:67096 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:67104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:67112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:67120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:67136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.243 [2024-12-16 01:44:29.786457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24487a0 is same with the state(6) to be set 00:23:20.243 [2024-12-16 01:44:29.786502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.243 [2024-12-16 01:44:29.786511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.243 [2024-12-16 01:44:29.786521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67144 len:8 PRP1 0x0 PRP2 0x0 00:23:20.243 [2024-12-16 01:44:29.786548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.243 [2024-12-16 01:44:29.786562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67664 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:20.244 [2024-12-16 01:44:29.786646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67672 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67680 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67688 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786771] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67696 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67704 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67712 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67720 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.786955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67152 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.786966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.786984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.786993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.787003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67160 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.787014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.787026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.787034] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.787043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67168 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.787054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.787068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.787077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.787086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67176 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.787098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.787109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.787118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.787127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67184 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.787138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.787150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.787159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.799637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:67192 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.799680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.799703] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.799716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.799730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67200 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.799746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.799763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:20.244 [2024-12-16 01:44:29.799774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:20.244 [2024-12-16 01:44:29.799787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67208 len:8 PRP1 0x0 PRP2 0x0 00:23:20.244 [2024-12-16 01:44:29.799802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.800058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-12-16 01:44:29.800107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.800136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-12-16 01:44:29.800149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.800178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-12-16 01:44:29.800189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.800201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-12-16 01:44:29.800212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.800224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:20.244 [2024-12-16 01:44:29.800236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:20.244 [2024-12-16 01:44:29.800253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bf50 is same with the state(6) to be set 00:23:20.244 [2024-12-16 01:44:29.801319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:20.244 [2024-12-16 01:44:29.801356] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242bf50 (9): Bad file descriptor 00:23:20.244 [2024-12-16 01:44:29.801818] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.244 [2024-12-16 01:44:29.801854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x242bf50 with addr=10.0.0.3, port=4421 00:23:20.244 [2024-12-16 01:44:29.801872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bf50 is same with the state(6) to be set 00:23:20.245 [2024-12-16 01:44:29.801939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x242bf50 (9): Bad file descriptor 00:23:20.245 [2024-12-16 01:44:29.801968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:20.245 [2024-12-16 01:44:29.801983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:20.245 [2024-12-16 01:44:29.801996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:20.245 [2024-12-16 01:44:29.802019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:20.245 [2024-12-16 01:44:29.802032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:20.245 7701.19 IOPS, 30.08 MiB/s [2024-12-16T01:44:50.903Z] 7750.35 IOPS, 30.27 MiB/s [2024-12-16T01:44:50.903Z] 7806.61 IOPS, 30.49 MiB/s [2024-12-16T01:44:50.903Z] 7861.82 IOPS, 30.71 MiB/s [2024-12-16T01:44:50.903Z] 7912.27 IOPS, 30.91 MiB/s [2024-12-16T01:44:50.903Z] 7959.10 IOPS, 31.09 MiB/s [2024-12-16T01:44:50.903Z] 8004.83 IOPS, 31.27 MiB/s [2024-12-16T01:44:50.903Z] 8043.05 IOPS, 31.42 MiB/s [2024-12-16T01:44:50.903Z] 8085.34 IOPS, 31.58 MiB/s [2024-12-16T01:44:50.903Z] 8127.00 IOPS, 31.75 MiB/s [2024-12-16T01:44:50.903Z] [2024-12-16 01:44:39.860760] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
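The burst of ABORTED - SQ DELETION (00/08) completions above is the expected signature of an active path being torn down mid-I/O: every read and write still queued on the dropped TCP connection is completed with that status, and the I/O is retried once the controller is reset onto the second listener (10.0.0.3, port 4421). The interleaved IOPS samples show throughput recovering after each failover. Assuming the per-test capture file (host/try.txt, which the script deletes during cleanup below) is still on disk, one rough way to confirm the failovers and the abort storm is to count the corresponding notices, for example:

  # Illustrative only; try.txt exists only until the cleanup step below removes it.
  grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt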
00:23:20.245 8167.85 IOPS, 31.91 MiB/s [2024-12-16T01:44:50.903Z] 8210.98 IOPS, 32.07 MiB/s [2024-12-16T01:44:50.903Z] 8250.25 IOPS, 32.23 MiB/s [2024-12-16T01:44:50.903Z] 8289.06 IOPS, 32.38 MiB/s [2024-12-16T01:44:50.903Z] 8319.04 IOPS, 32.50 MiB/s [2024-12-16T01:44:50.903Z] 8352.55 IOPS, 32.63 MiB/s [2024-12-16T01:44:50.903Z] 8385.54 IOPS, 32.76 MiB/s [2024-12-16T01:44:50.903Z] 8415.75 IOPS, 32.87 MiB/s [2024-12-16T01:44:50.903Z] 8445.06 IOPS, 32.99 MiB/s [2024-12-16T01:44:50.903Z] 8473.36 IOPS, 33.10 MiB/s [2024-12-16T01:44:50.903Z] Received shutdown signal, test time was about 55.516475 seconds 00:23:20.245 00:23:20.245 Latency(us) 00:23:20.245 [2024-12-16T01:44:50.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.245 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:20.245 Verification LBA range: start 0x0 length 0x4000 00:23:20.245 Nvme0n1 : 55.52 8482.55 33.13 0.00 0.00 15059.47 165.70 7046430.72 00:23:20.245 [2024-12-16T01:44:50.903Z] =================================================================================================================== 00:23:20.245 [2024-12-16T01:44:50.903Z] Total : 8482.55 33.13 0.00 0.00 15059.47 165.70 7046430.72 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:20.245 rmmod nvme_tcp 00:23:20.245 rmmod nvme_fabrics 00:23:20.245 rmmod nvme_keyring 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 98037 ']' 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 98037 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 98037 ']' 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 98037 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98037 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.245 killing process with pid 98037 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98037' 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 98037 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 98037 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:20.245 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.504 01:44:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:23:20.504 00:23:20.504 real 1m1.011s 00:23:20.504 user 2m49.142s 00:23:20.504 sys 0m18.079s 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:20.504 ************************************ 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:23:20.504 END TEST nvmf_host_multipath 00:23:20.504 ************************************ 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:20.504 01:44:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.504 ************************************ 00:23:20.504 START TEST nvmf_timeout 00:23:20.504 ************************************ 00:23:20.505 01:44:50 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:23:20.505 * Looking for test storage... 00:23:20.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:20.505 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:20.505 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:23:20.505 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:20.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.764 --rc genhtml_branch_coverage=1 00:23:20.764 --rc genhtml_function_coverage=1 00:23:20.764 --rc genhtml_legend=1 00:23:20.764 --rc geninfo_all_blocks=1 00:23:20.764 --rc geninfo_unexecuted_blocks=1 00:23:20.764 00:23:20.764 ' 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:20.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.764 --rc genhtml_branch_coverage=1 00:23:20.764 --rc genhtml_function_coverage=1 00:23:20.764 --rc genhtml_legend=1 00:23:20.764 --rc geninfo_all_blocks=1 00:23:20.764 --rc geninfo_unexecuted_blocks=1 00:23:20.764 00:23:20.764 ' 00:23:20.764 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:20.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.764 --rc genhtml_branch_coverage=1 00:23:20.764 --rc genhtml_function_coverage=1 00:23:20.764 --rc genhtml_legend=1 00:23:20.764 --rc geninfo_all_blocks=1 00:23:20.764 --rc geninfo_unexecuted_blocks=1 00:23:20.764 00:23:20.765 ' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:20.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.765 --rc genhtml_branch_coverage=1 00:23:20.765 --rc genhtml_function_coverage=1 00:23:20.765 --rc genhtml_legend=1 00:23:20.765 --rc geninfo_all_blocks=1 00:23:20.765 --rc geninfo_unexecuted_blocks=1 00:23:20.765 00:23:20.765 ' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.765 
01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:20.765 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.765 01:44:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:20.765 Cannot find device "nvmf_init_br" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:20.765 Cannot find device "nvmf_init_br2" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:23:20.765 Cannot find device "nvmf_tgt_br" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:20.765 Cannot find device "nvmf_tgt_br2" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:20.765 Cannot find device "nvmf_init_br" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:20.765 Cannot find device "nvmf_init_br2" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:20.765 Cannot find device "nvmf_tgt_br" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:20.765 Cannot find device "nvmf_tgt_br2" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:20.765 Cannot find device "nvmf_br" 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:23:20.765 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:20.766 Cannot find device "nvmf_init_if" 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:20.766 Cannot find device "nvmf_init_if2" 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:20.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:20.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:20.766 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
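Condensed, the nvmf_veth_init trace above builds a self-contained topology: two initiator-side veth interfaces (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side interfaces (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) moved into the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with iptables rules admitting TCP port 4420. A minimal standalone sketch of the same setup, showing only the first initiator/target pair and omitting the harness's error handling (names and addresses are taken from the traced commands; run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The ping checks that follow verify that each of the four addresses answers across the bridge before the target is started.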
00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:21.025 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:21.025 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:23:21.025 00:23:21.025 --- 10.0.0.3 ping statistics --- 00:23:21.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.025 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:21.025 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:21.025 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:23:21.025 00:23:21.025 --- 10.0.0.4 ping statistics --- 00:23:21.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.025 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:21.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:23:21.025 00:23:21.025 --- 10.0.0.1 ping statistics --- 00:23:21.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.025 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:21.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:23:21.025 00:23:21.025 --- 10.0.0.2 ping statistics --- 00:23:21.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.025 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=99243 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 99243 00:23:21.025 01:44:51 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99243 ']' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:21.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:21.025 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.025 [2024-12-16 01:44:51.648695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:21.025 [2024-12-16 01:44:51.649296] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.285 [2024-12-16 01:44:51.799783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:21.285 [2024-12-16 01:44:51.820029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.285 [2024-12-16 01:44:51.820073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.285 [2024-12-16 01:44:51.820084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.285 [2024-12-16 01:44:51.820091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.285 [2024-12-16 01:44:51.820098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
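At this point the target application is up inside the namespace. The launch traced just above, reduced to a single command with its flags spelled out (-i 0 selects the shared-memory id that names the trace file, -e 0xFFFF enables every tracepoint group, -m 0x3 runs reactors on cores 0 and 1, which matches the two reactor notices that follow):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
  # As the startup banner suggests, a snapshot of the enabled tracepoints can be
  # captured while it runs with:
  #   spdk_trace -s nvmf -i 0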
00:23:21.285 [2024-12-16 01:44:51.820910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.285 [2024-12-16 01:44:51.820921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.285 [2024-12-16 01:44:51.851085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.285 01:44:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:21.853 [2024-12-16 01:44:52.228111] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.853 01:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:22.112 Malloc0 00:23:22.112 01:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:22.370 01:44:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:22.630 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:22.889 [2024-12-16 01:44:53.341988] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=99290 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 99290 /var/tmp/bdevperf.sock 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99290 ']' 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
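[Editor's note] The rpc.py calls traced above assemble the target the timeout test will run against: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem, that bdev as its namespace, and a listener on 10.0.0.3:4420; bdevperf is then started with -z so it idles until the perform_tests RPC seen later in the trace. The same sequence collected into one sketch, commands exactly as traced:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, flags as traced above
$rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MB backing bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420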
00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.889 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:22.889 [2024-12-16 01:44:53.420993] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:22.889 [2024-12-16 01:44:53.421121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99290 ] 00:23:23.148 [2024-12-16 01:44:53.572422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.148 [2024-12-16 01:44:53.592306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:23.148 [2024-12-16 01:44:53.621140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:23.148 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.148 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:23.148 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:23.407 01:44:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:23.666 NVMe0n1 00:23:23.666 01:44:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=99301 00:23:23.666 01:44:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:23.666 01:44:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:23:23.925 Running I/O for 10 seconds... 
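[Editor's note] On the initiator side the trace above prepares bdevperf's NVMe bdev layer before attaching: bdev_nvme_set_options -r -1 (which I read as an unbounded bdev-level retry count), then bdev_nvme_attach_controller with the two knobs this test exercises, --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, which I read as: retry the connection every 2 seconds and declare the controller lost after 5 seconds without a successful reconnect. A sketch against the bdevperf RPC socket, using only calls that appear in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
$rpc -s $sock bdev_nvme_set_options -r -1
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# kick off the verify workload bdevperf was armed with (-q 128 -o 4096 -w verify -t 10 on its command line)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests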
00:23:24.865 01:44:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:24.865 8832.00 IOPS, 34.50 MiB/s [2024-12-16T01:44:55.523Z] [2024-12-16 01:44:55.476493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81416 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.865 [2024-12-16 01:44:55.476896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.865 [2024-12-16 01:44:55.476914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.865 [2024-12-16 01:44:55.476940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.476948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.476957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.476966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.476975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:24.866 [2024-12-16 01:44:55.476983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.476992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.866 [2024-12-16 01:44:55.477849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.477860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.477869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.478854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.478878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.866 [2024-12-16 01:44:55.479670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.866 [2024-12-16 01:44:55.479681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.479890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.479915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.479926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.479952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.479961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.479972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.479980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.480744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.480760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 
01:44:55.480985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.867 [2024-12-16 01:44:55.481738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:72 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.867 [2024-12-16 01:44:55.481870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.867 [2024-12-16 01:44:55.481879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1669140 is same with the state(6) to be set 00:23:24.867 [2024-12-16 01:44:55.481891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.867 [2024-12-16 01:44:55.481898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.867 [2024-12-16 01:44:55.481906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81720 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.481915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.481924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.481931] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.481937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82320 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 
01:44:55.481946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.481954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.481961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.481969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82328 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.481977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.481985] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.481992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.481999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82336 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82344 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82352 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482076] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82360 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82368 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82376 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82384 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82392 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82400 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82408 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82416 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82424 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81728 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81736 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81744 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81752 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81760 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.868 [2024-12-16 01:44:55.482597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81768 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81776 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81784 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.868 [2024-12-16 01:44:55.482702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81792 len:8 PRP1 0x0 PRP2 0x0 00:23:24.868 [2024-12-16 01:44:55.482710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.868 [2024-12-16 01:44:55.482718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.868 [2024-12-16 01:44:55.482725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81800 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.482750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.869 [2024-12-16 01:44:55.482757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81808 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.482780] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.869 [2024-12-16 01:44:55.482787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81816 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.482810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.869 [2024-12-16 01:44:55.482816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81824 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.482840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.869 [2024-12-16 01:44:55.482846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81832 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.482870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.869 [2024-12-16 01:44:55.482876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81840 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.482900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:24.869 [2024-12-16 01:44:55.482907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:24.869 [2024-12-16 01:44:55.482914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81848 len:8 PRP1 0x0 PRP2 0x0 00:23:24.869 [2024-12-16 01:44:55.482922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.483057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.869 [2024-12-16 01:44:55.483074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.483085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.869 [2024-12-16 01:44:55.483093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.869 [2024-12-16 01:44:55.483102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.869 [2024-12-16 01:44:55.483111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.483125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.869 [2024-12-16 01:44:55.483134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.869 [2024-12-16 01:44:55.483142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648a90 is same with the state(6) to be set 00:23:24.869 [2024-12-16 01:44:55.483344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:24.869 [2024-12-16 01:44:55.483364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648a90 (9): Bad file descriptor 00:23:24.869 [2024-12-16 01:44:55.483456] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.869 [2024-12-16 01:44:55.483477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1648a90 with addr=10.0.0.3, port=4420 00:23:24.869 [2024-12-16 01:44:55.483487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648a90 is same with the state(6) to be set 00:23:24.869 [2024-12-16 01:44:55.483504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648a90 (9): Bad file descriptor 00:23:24.869 [2024-12-16 01:44:55.483518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:24.869 [2024-12-16 01:44:55.483559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:24.869 [2024-12-16 01:44:55.483570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:24.869 [2024-12-16 01:44:55.483580] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
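The cycle above is the reconnect path once the connection to 10.0.0.3:4420 starts being refused: connect() fails with errno 111, the flush of the stale qpair reports a bad file descriptor, controller reinitialization fails, and bdev_nvme schedules the next reset. A minimal, stand-alone sketch (not part of the harness) of how the same state can be polled by hand, reusing the rpc.py and jq calls the script issues in the following trace lines; socket path and names follow the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# controller object is still present while within --ctrlr-loss-timeout-sec (trace shows "NVMe0")
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
# namespace bdev is still registered during the outage (trace shows "NVMe0n1"); both queries return nothing once the loss timeout expires
$rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'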
00:23:24.869 [2024-12-16 01:44:55.483590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:24.869 01:44:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:23:26.740 5088.00 IOPS, 19.88 MiB/s [2024-12-16T01:44:57.657Z] 3392.00 IOPS, 13.25 MiB/s [2024-12-16T01:44:57.658Z] [2024-12-16 01:44:57.483774] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:27.000 [2024-12-16 01:44:57.483834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1648a90 with addr=10.0.0.3, port=4420 00:23:27.000 [2024-12-16 01:44:57.483848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648a90 is same with the state(6) to be set 00:23:27.000 [2024-12-16 01:44:57.483870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648a90 (9): Bad file descriptor 00:23:27.000 [2024-12-16 01:44:57.483887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:27.000 [2024-12-16 01:44:57.483896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:27.000 [2024-12-16 01:44:57.483906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:27.000 [2024-12-16 01:44:57.483916] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:27.000 [2024-12-16 01:44:57.483926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:27.000 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:23:27.000 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:27.000 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:27.258 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:23:27.258 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:23:27.258 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:27.258 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:27.517 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:23:27.517 01:44:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:28.751 2544.00 IOPS, 9.94 MiB/s [2024-12-16T01:44:59.690Z] 2035.20 IOPS, 7.95 MiB/s [2024-12-16T01:44:59.690Z] [2024-12-16 01:44:59.484138] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:29.032 [2024-12-16 01:44:59.484199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1648a90 with addr=10.0.0.3, port=4420 00:23:29.032 [2024-12-16 01:44:59.484213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1648a90 is same with the state(6) to be set 00:23:29.032 [2024-12-16 01:44:59.484235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1648a90 (9): Bad file descriptor 00:23:29.033 [2024-12-16 01:44:59.484252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:29.033 [2024-12-16 01:44:59.484260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:29.033 [2024-12-16 01:44:59.484270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:29.033 [2024-12-16 01:44:59.484279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:29.033 [2024-12-16 01:44:59.484289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:30.905 1696.00 IOPS, 6.62 MiB/s [2024-12-16T01:45:01.563Z] 1453.71 IOPS, 5.68 MiB/s [2024-12-16T01:45:01.563Z] [2024-12-16 01:45:01.484414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:30.905 [2024-12-16 01:45:01.484468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:30.905 [2024-12-16 01:45:01.484478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:30.905 [2024-12-16 01:45:01.484487] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:30.905 [2024-12-16 01:45:01.484497] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:31.843 1272.00 IOPS, 4.97 MiB/s 00:23:31.843 Latency(us) 00:23:31.843 [2024-12-16T01:45:02.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.843 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:31.843 Verification LBA range: start 0x0 length 0x4000 00:23:31.843 NVMe0n1 : 8.13 1251.47 4.89 15.74 0.00 100839.69 3187.43 7015926.69 00:23:31.843 [2024-12-16T01:45:02.501Z] =================================================================================================================== 00:23:31.843 [2024-12-16T01:45:02.501Z] Total : 1251.47 4.89 15.74 0.00 100839.69 3187.43 7015926.69 00:23:31.843 { 00:23:31.843 "results": [ 00:23:31.843 { 00:23:31.843 "job": "NVMe0n1", 00:23:31.843 "core_mask": "0x4", 00:23:31.843 "workload": "verify", 00:23:31.843 "status": "finished", 00:23:31.843 "verify_range": { 00:23:31.843 "start": 0, 00:23:31.843 "length": 16384 00:23:31.843 }, 00:23:31.843 "queue_depth": 128, 00:23:31.843 "io_size": 4096, 00:23:31.843 "runtime": 8.131252, 00:23:31.843 "iops": 1251.4677936435864, 00:23:31.843 "mibps": 4.88854606892026, 00:23:31.843 "io_failed": 128, 00:23:31.843 "io_timeout": 0, 00:23:31.843 "avg_latency_us": 100839.68505928853, 00:23:31.843 "min_latency_us": 3187.4327272727273, 00:23:31.843 "max_latency_us": 7015926.69090909 00:23:31.843 } 00:23:31.843 ], 00:23:31.843 "core_count": 1 00:23:31.843 } 00:23:32.412 01:45:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:32.412 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.412 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:32.671 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:32.671 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:32.671 01:45:03 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:32.672 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 99301 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 99290 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99290 ']' 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99290 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99290 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:32.931 killing process with pid 99290 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99290' 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99290 00:23:32.931 Received shutdown signal, test time was about 9.223582 seconds 00:23:32.931 00:23:32.931 Latency(us) 00:23:32.931 [2024-12-16T01:45:03.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.931 [2024-12-16T01:45:03.589Z] =================================================================================================================== 00:23:32.931 [2024-12-16T01:45:03.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.931 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99290 00:23:33.190 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:33.449 [2024-12-16 01:45:03.961734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=99418 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 99418 /var/tmp/bdevperf.sock 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99418 ']' 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:33.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.449 01:45:03 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:33.449 [2024-12-16 01:45:04.032623] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:33.449 [2024-12-16 01:45:04.032707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99418 ] 00:23:33.708 [2024-12-16 01:45:04.176794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.708 [2024-12-16 01:45:04.197700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.708 [2024-12-16 01:45:04.227676] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:33.708 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.708 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:33.708 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:33.967 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:34.225 NVMe0n1 00:23:34.225 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:34.225 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=99434 00:23:34.225 01:45:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:34.483 Running I/O for 10 seconds... 
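The trace above sets up the next timeout case: a TCP listener is added on 10.0.0.3:4420, a fresh bdevperf is started on /var/tmp/bdevperf.sock, and NVMe0 is attached with explicit reconnect/loss limits before perform_tests launches the verify workload. Condensed into an illustrative, stand-alone sketch using the same commands and flags as the trace (paths assume the same vagrant layout):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

# 1. expose the subsystem on the target again
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# 2. start bdevperf on its own RPC socket; -z defers I/O until the perform_tests RPC
$bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
# (the harness waits for the socket with waitforlisten before issuing RPCs)

# 3. attach the controller with the reconnect/loss limits exercised by this test
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1   # option string reproduced verbatim from the trace
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# 4. start the workload, then drop the listener to force the timeout path (as the next trace line does)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420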
00:23:35.418 01:45:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:35.679 7957.00 IOPS, 31.08 MiB/s [2024-12-16T01:45:06.337Z] [2024-12-16 01:45:06.086165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086256] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.679 [2024-12-16 01:45:06.086346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 
01:45:06.086389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to 
be set 00:23:35.680 [2024-12-16 01:45:06.086646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086719] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086751] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.086988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087019] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087034] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ddea0 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.087557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.680 [2024-12-16 01:45:06.087613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.680 [2024-12-16 01:45:06.087628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.680 [2024-12-16 01:45:06.087637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.680 [2024-12-16 01:45:06.087664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.680 [2024-12-16 01:45:06.087673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.680 [2024-12-16 01:45:06.087683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.680 [2024-12-16 01:45:06.087692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.680 [2024-12-16 01:45:06.087702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:35.680 [2024-12-16 01:45:06.088583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.680 [2024-12-16 01:45:06.088628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.680 [2024-12-16 01:45:06.088650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.088983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.088993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 
01:45:06.089292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.089842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.089857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090680] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.090977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.090989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.091001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.091010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.091020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.091029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.091039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.091048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.091058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.091067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.091077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.091085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.681 [2024-12-16 01:45:06.091193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.681 [2024-12-16 01:45:06.091207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.091899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.091923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.092353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.092488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.092505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.092774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:35.682 [2024-12-16 01:45:06.092790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.092801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.092813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.092822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.092834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.092843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.092854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.092863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.092874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.093154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.093362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.093385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.093404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.093423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.093755] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.093921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.094669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.094808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.682 [2024-12-16 01:45:06.095716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.682 [2024-12-16 01:45:06.095728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.095737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.095749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.095758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.095769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:100 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.096801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.096815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71696 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.097855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.097864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:35.683 [2024-12-16 01:45:06.098239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.683 [2024-12-16 01:45:06.098739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.098990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.099869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.099889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.100000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.100015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.683 [2024-12-16 01:45:06.100028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.683 [2024-12-16 01:45:06.100036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.684 [2024-12-16 01:45:06.100287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.684 [2024-12-16 01:45:06.100298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.684 [2024-12-16 01:45:06.100311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.684 [2024-12-16 01:45:06.100320] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.684 [2024-12-16 01:45:06.100331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.684 [2024-12-16 01:45:06.100339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.684 [2024-12-16 01:45:06.100569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.684 [2024-12-16 01:45:06.100590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.684 [2024-12-16 01:45:06.100604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2162140 is same with the state(6) to be set 00:23:35.684 [2024-12-16 01:45:06.100616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.684 [2024-12-16 01:45:06.100623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.684 [2024-12-16 01:45:06.100631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:23:35.684 [2024-12-16 01:45:06.100639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.684 [2024-12-16 01:45:06.100938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:35.684 [2024-12-16 01:45:06.101262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:35.684 [2024-12-16 01:45:06.101463] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:35.684 [2024-12-16 01:45:06.101611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141a90 with addr=10.0.0.3, port=4420 00:23:35.684 [2024-12-16 01:45:06.101722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:35.684 [2024-12-16 01:45:06.101752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:35.684 [2024-12-16 01:45:06.101770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:35.684 [2024-12-16 01:45:06.101894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:35.684 [2024-12-16 01:45:06.101906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:35.684 [2024-12-16 01:45:06.102014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:23:35.684 [2024-12-16 01:45:06.102028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:35.684 01:45:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:36.620 4434.00 IOPS, 17.32 MiB/s [2024-12-16T01:45:07.278Z] [2024-12-16 01:45:07.102270] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:36.620 [2024-12-16 01:45:07.102351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141a90 with addr=10.0.0.3, port=4420 00:23:36.620 [2024-12-16 01:45:07.102366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:36.620 [2024-12-16 01:45:07.102387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:36.620 [2024-12-16 01:45:07.102405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:36.620 [2024-12-16 01:45:07.102415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:36.620 [2024-12-16 01:45:07.102426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:36.620 [2024-12-16 01:45:07.102436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:23:36.620 [2024-12-16 01:45:07.102446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:36.620 01:45:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:36.878 [2024-12-16 01:45:07.356799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:36.878 01:45:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 99434 00:23:37.705 2956.00 IOPS, 11.55 MiB/s [2024-12-16T01:45:08.363Z] [2024-12-16 01:45:08.116260] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
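The aborted reads and repeated reset failures above cover the window in which the TCP listener on 10.0.0.3:4420 was absent: each reconnect attempt fails with connect() errno 111 until host/timeout.sh re-adds the listener via the rpc.py call shown, after which the controller reset completes ("Resetting controller successful"). Below is a minimal sketch of that listener toggle, assuming a target that already exposes nqn.2016-06.io.spdk:cnode1 and the rpc.py path used in this run; the one-second delay is illustrative, not the test's exact timing.

```bash
#!/usr/bin/env bash
# Sketch of the listener toggle exercised by the timeout test (paths/timing assumed).
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.3
PORT=4420

# Drop the listener: queued I/O is aborted ("ABORTED - SQ DELETION") and the
# host's reconnect attempts fail with connect() errno 111, as in the log above.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a "$ADDR" -s "$PORT"

sleep 1   # host keeps retrying and failing while the listener is gone

# Restore the listener: the next controller reset then completes successfully.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s "$PORT"
```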
00:23:39.577 2217.00 IOPS, 8.66 MiB/s [2024-12-16T01:45:11.172Z] 3614.80 IOPS, 14.12 MiB/s [2024-12-16T01:45:12.109Z] 4799.00 IOPS, 18.75 MiB/s [2024-12-16T01:45:13.045Z] 5656.29 IOPS, 22.09 MiB/s [2024-12-16T01:45:13.981Z] 6303.25 IOPS, 24.62 MiB/s [2024-12-16T01:45:15.357Z] 6805.56 IOPS, 26.58 MiB/s [2024-12-16T01:45:15.357Z] 7213.00 IOPS, 28.18 MiB/s 00:23:44.699 Latency(us) 00:23:44.699 [2024-12-16T01:45:15.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.699 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:44.699 Verification LBA range: start 0x0 length 0x4000 00:23:44.699 NVMe0n1 : 10.01 7219.93 28.20 0.00 0.00 17689.05 1288.38 3035150.89 00:23:44.699 [2024-12-16T01:45:15.357Z] =================================================================================================================== 00:23:44.699 [2024-12-16T01:45:15.357Z] Total : 7219.93 28.20 0.00 0.00 17689.05 1288.38 3035150.89 00:23:44.699 { 00:23:44.699 "results": [ 00:23:44.699 { 00:23:44.699 "job": "NVMe0n1", 00:23:44.699 "core_mask": "0x4", 00:23:44.699 "workload": "verify", 00:23:44.699 "status": "finished", 00:23:44.699 "verify_range": { 00:23:44.699 "start": 0, 00:23:44.699 "length": 16384 00:23:44.699 }, 00:23:44.699 "queue_depth": 128, 00:23:44.699 "io_size": 4096, 00:23:44.699 "runtime": 10.008128, 00:23:44.699 "iops": 7219.931639563363, 00:23:44.699 "mibps": 28.202857967044388, 00:23:44.699 "io_failed": 0, 00:23:44.699 "io_timeout": 0, 00:23:44.699 "avg_latency_us": 17689.050702055007, 00:23:44.699 "min_latency_us": 1288.378181818182, 00:23:44.699 "max_latency_us": 3035150.8945454545 00:23:44.699 } 00:23:44.699 ], 00:23:44.699 "core_count": 1 00:23:44.699 } 00:23:44.699 01:45:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=99539 00:23:44.699 01:45:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:44.699 01:45:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:23:44.699 Running I/O for 10 seconds... 
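The JSON object above is the machine-readable form of the bdevperf latency summary printed just before it (per-job iops/mibps/latency figures plus core_count). A hypothetical post-processing snippet follows, assuming the blob was captured to a file named bdevperf_results.json and that jq is available on the test host; the field names are taken directly from the output above.

```bash
# Hypothetical: summarize the bdevperf JSON result captured above (file name assumed).
jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us, failed=\(.io_failed)"' \
   bdevperf_results.json
```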
00:23:45.636 01:45:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:45.636 8084.00 IOPS, 31.58 MiB/s [2024-12-16T01:45:16.294Z] [2024-12-16 01:45:16.203044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.636 [2024-12-16 01:45:16.203106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.636 [2024-12-16 01:45:16.203152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.636 [2024-12-16 01:45:16.203170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.636 [2024-12-16 01:45:16.203188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.636 [2024-12-16 01:45:16.203206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.636 [2024-12-16 01:45:16.203223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.636 [2024-12-16 01:45:16.203241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.636 [2024-12-16 01:45:16.203250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-12-16 01:45:16.203275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.203892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.203916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.204469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.204478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 
01:45:16.205191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.205990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.205999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.206010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.206018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.206302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.206328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.206343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.206353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.206364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.206373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.206384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.206393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.637 [2024-12-16 01:45:16.206404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-12-16 01:45:16.206413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.206423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.206432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.206443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.206452] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.206890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.206914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.206941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.206951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.206963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.206973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.206983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.206992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.207848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.207997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:45.638 [2024-12-16 01:45:16.208554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.208705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.208833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209280] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.209730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.209746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.210012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.210032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.210045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.638 [2024-12-16 01:45:16.210069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-12-16 01:45:16.210080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.210809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.210817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:16 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:70752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.211769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.211779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.212166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.212191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.212201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.212212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.639 [2024-12-16 01:45:16.212220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.212231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215fbc0 is same with the state(6) to be set 00:23:45.639 [2024-12-16 01:45:16.212244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.212251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.212262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70800 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.212270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.212280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.212287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.212295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70824 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.212551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.212723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.212889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.212989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70832 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70840 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70848 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70856 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70864 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70872 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70880 len:8 PRP1 0x0 PRP2 0x0 00:23:45.639 [2024-12-16 01:45:16.213659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.639 [2024-12-16 01:45:16.213673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.639 [2024-12-16 01:45:16.213680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.639 [2024-12-16 01:45:16.213799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70888 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.213812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.213822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.640 [2024-12-16 01:45:16.213938] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.640 [2024-12-16 01:45:16.213953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70896 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.213962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.214087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.640 [2024-12-16 01:45:16.214107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.640 [2024-12-16 01:45:16.214353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70904 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.214376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.214389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.640 [2024-12-16 01:45:16.214397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.640 [2024-12-16 01:45:16.214405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70912 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.214415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.214424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.640 [2024-12-16 01:45:16.214431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.640 [2024-12-16 01:45:16.214439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70920 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.214448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.214457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.640 [2024-12-16 01:45:16.214464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.640 [2024-12-16 01:45:16.214487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70928 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.214748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.215017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:45.640 [2024-12-16 01:45:16.215035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:45.640 [2024-12-16 01:45:16.215148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70936 len:8 PRP1 0x0 PRP2 0x0 00:23:45.640 [2024-12-16 01:45:16.215167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.215514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.640 [2024-12-16 01:45:16.215569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.215599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.640 [2024-12-16 01:45:16.215608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.215619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.640 [2024-12-16 01:45:16.215628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.215638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:45.640 [2024-12-16 01:45:16.215842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.640 [2024-12-16 01:45:16.215867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:45.640 [2024-12-16 01:45:16.216272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:45.640 [2024-12-16 01:45:16.216307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:45.640 [2024-12-16 01:45:16.216614] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:45.640 [2024-12-16 01:45:16.216646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141a90 with addr=10.0.0.3, port=4420 00:23:45.640 [2024-12-16 01:45:16.216658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:45.640 [2024-12-16 01:45:16.216678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:45.640 [2024-12-16 01:45:16.216694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:45.640 [2024-12-16 01:45:16.216703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:45.640 [2024-12-16 01:45:16.216925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:45.640 [2024-12-16 01:45:16.216950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:45.640 [2024-12-16 01:45:16.216962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:45.640 01:45:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:46.576 4370.00 IOPS, 17.07 MiB/s [2024-12-16T01:45:17.234Z] [2024-12-16 01:45:17.217068] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:46.576 [2024-12-16 01:45:17.217147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141a90 with addr=10.0.0.3, port=4420 00:23:46.576 [2024-12-16 01:45:17.217162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:46.576 [2024-12-16 01:45:17.217183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:46.576 [2024-12-16 01:45:17.217199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:46.576 [2024-12-16 01:45:17.217208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:46.576 [2024-12-16 01:45:17.217219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:46.576 [2024-12-16 01:45:17.217229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:46.576 [2024-12-16 01:45:17.217239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:47.769 2913.33 IOPS, 11.38 MiB/s [2024-12-16T01:45:18.427Z] [2024-12-16 01:45:18.217330] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.769 [2024-12-16 01:45:18.217407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141a90 with addr=10.0.0.3, port=4420 00:23:47.769 [2024-12-16 01:45:18.217421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:47.769 [2024-12-16 01:45:18.217440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:47.769 [2024-12-16 01:45:18.217457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:47.769 [2024-12-16 01:45:18.217465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:47.769 [2024-12-16 01:45:18.217475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:47.769 [2024-12-16 01:45:18.217484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:47.770 [2024-12-16 01:45:18.217493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:48.706 2185.00 IOPS, 8.54 MiB/s [2024-12-16T01:45:19.364Z] [2024-12-16 01:45:19.220267] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.706 [2024-12-16 01:45:19.220345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141a90 with addr=10.0.0.3, port=4420 00:23:48.706 [2024-12-16 01:45:19.220360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141a90 is same with the state(6) to be set 00:23:48.706 [2024-12-16 01:45:19.220843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141a90 (9): Bad file descriptor 00:23:48.706 [2024-12-16 01:45:19.221323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:48.706 [2024-12-16 01:45:19.221353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:48.706 [2024-12-16 01:45:19.221366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:48.706 [2024-12-16 01:45:19.221377] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:48.707 [2024-12-16 01:45:19.221388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:48.707 01:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:48.966 [2024-12-16 01:45:19.445152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:48.966 01:45:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 99539 00:23:49.792 1748.00 IOPS, 6.83 MiB/s [2024-12-16T01:45:20.450Z] [2024-12-16 01:45:20.244654] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:23:51.666 3021.83 IOPS, 11.80 MiB/s [2024-12-16T01:45:23.261Z] 4147.86 IOPS, 16.20 MiB/s [2024-12-16T01:45:24.242Z] 4992.38 IOPS, 19.50 MiB/s [2024-12-16T01:45:25.178Z] 5636.78 IOPS, 22.02 MiB/s [2024-12-16T01:45:25.178Z] 6161.90 IOPS, 24.07 MiB/s 00:23:54.520 Latency(us) 00:23:54.520 [2024-12-16T01:45:25.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.520 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:54.520 Verification LBA range: start 0x0 length 0x4000 00:23:54.520 NVMe0n1 : 10.01 6170.09 24.10 4107.93 0.00 12429.60 793.13 3035150.89 00:23:54.520 [2024-12-16T01:45:25.178Z] =================================================================================================================== 00:23:54.520 [2024-12-16T01:45:25.178Z] Total : 6170.09 24.10 4107.93 0.00 12429.60 0.00 3035150.89 00:23:54.520 { 00:23:54.520 "results": [ 00:23:54.520 { 00:23:54.520 "job": "NVMe0n1", 00:23:54.520 "core_mask": "0x4", 00:23:54.520 "workload": "verify", 00:23:54.520 "status": "finished", 00:23:54.520 "verify_range": { 00:23:54.520 "start": 0, 00:23:54.520 "length": 16384 00:23:54.520 }, 00:23:54.520 "queue_depth": 128, 00:23:54.520 "io_size": 4096, 00:23:54.520 "runtime": 10.007476, 00:23:54.520 "iops": 6170.087242777299, 00:23:54.520 "mibps": 24.101903292098825, 00:23:54.520 "io_failed": 41110, 00:23:54.520 "io_timeout": 0, 00:23:54.520 "avg_latency_us": 12429.601357719057, 00:23:54.520 "min_latency_us": 793.1345454545454, 00:23:54.520 "max_latency_us": 3035150.8945454545 00:23:54.520 } 00:23:54.520 ], 00:23:54.520 "core_count": 1 00:23:54.520 } 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 99418 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99418 ']' 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99418 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99418 00:23:54.520 killing process with pid 99418 00:23:54.520 Received shutdown signal, test time was about 10.000000 seconds 00:23:54.520 00:23:54.520 Latency(us) 00:23:54.520 [2024-12-16T01:45:25.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.520 [2024-12-16T01:45:25.178Z] =================================================================================================================== 00:23:54.520 [2024-12-16T01:45:25.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99418' 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99418 00:23:54.520 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99418 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
randread -t 10 -f 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=99653 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 99653 /var/tmp/bdevperf.sock 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 99653 ']' 00:23:54.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.778 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:54.778 [2024-12-16 01:45:25.340431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:54.778 [2024-12-16 01:45:25.340534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99653 ] 00:23:55.036 [2024-12-16 01:45:25.479922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.036 [2024-12-16 01:45:25.502960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.036 [2024-12-16 01:45:25.532829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:55.036 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.037 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:55.037 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 99653 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:55.037 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=99662 00:23:55.037 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:55.295 01:45:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:55.553 NVMe0n1 00:23:55.553 01:45:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=99703 00:23:55.553 01:45:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.553 01:45:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:55.812 Running I/O for 10 seconds... 
00:23:56.748 01:45:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:57.010 17145.00 IOPS, 66.97 MiB/s [2024-12-16T01:45:27.668Z] [2024-12-16 01:45:27.465461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 
01:45:27.465698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to 
be set 00:23:57.010 [2024-12-16 01:45:27.465895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.010 [2024-12-16 01:45:27.465911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465966] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.465997] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.466005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.466013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.466020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.466028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5db420 is same with the state(6) to be set 00:23:57.011 [2024-12-16 01:45:27.466583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.466625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.466648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.466659] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.466671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.466680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.466706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.466715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.466726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.466735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.466746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.466754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.466765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:127328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.467876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.467994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:42120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.011 [2024-12-16 01:45:27.468910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.468920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.011 [2024-12-16 01:45:27.468932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.011 [2024-12-16 01:45:27.469284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469712] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.469860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.469874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470884] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.470935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.470946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.471277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.471306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.471326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.471345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.471366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:57.012 [2024-12-16 01:45:27.471509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451140 is same with the state(6) to be set 00:23:57.012 [2024-12-16 01:45:27.471796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.471806] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 
01:45:27.471815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21560 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.471826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.471844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.471853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65816 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.471862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.471872] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.471895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.471903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49472 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.471912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.472190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.472262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.472273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96208 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.472282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.472294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.472301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.472309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66592 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.472318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.472327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.472334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.472342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111848 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.472350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.472359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.472366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.472373] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65528 len:8 PRP1 0x0 PRP2 0x0 00:23:57.012 [2024-12-16 01:45:27.472382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.012 [2024-12-16 01:45:27.472807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.012 [2024-12-16 01:45:27.472897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.012 [2024-12-16 01:45:27.472907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15960 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.472916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.472928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.472935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.472943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75368 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.472952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.472962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.472969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.472977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38136 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.473001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.473011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.473018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.473152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28168 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.473350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.473364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.473372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.473379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14008 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.473389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.473398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.473405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.473412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:73936 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.473421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.473430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.473553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.473583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.473726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.473830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.473840] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.473849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90840 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.473859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.473869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.473892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.473900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122048 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.474031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.474171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.474205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.474437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69768 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.474449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.474461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.474469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.474489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71072 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.474498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.474508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.474515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.474523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128352 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 
[2024-12-16 01:45:27.474546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.474840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.474851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.474860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126920 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.475113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.475120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.475128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39480 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.475147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.475153] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.475161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116960 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.475179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.475186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.475193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27664 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.475467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.475477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.475604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41640 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.475631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.475836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.475850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14888 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475859] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.475870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.475878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.475886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60848 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.475910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.476032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.476045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.476054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11480 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.476178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.476202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.476326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.476347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68520 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.476478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.476732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.476761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.476770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119800 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.476784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.476796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.476804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.476812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117272 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.476821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.476830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.476837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.476845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19784 len:8 PRP1 0x0 PRP2 0x0 00:23:57.013 [2024-12-16 01:45:27.476854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.013 [2024-12-16 01:45:27.476864] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.013 [2024-12-16 01:45:27.476871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.013 [2024-12-16 01:45:27.477011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79024 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.477257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.477270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.477278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.477286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99048 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.477295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.477304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.477311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.477318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32712 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.477463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.477617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.477735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.477745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98864 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.477755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.478014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.478030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.478291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.478311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.478436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.478446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.478674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76208 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.478695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:57.014 [2024-12-16 01:45:27.478707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.478714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.478722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15280 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.478731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.478740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.478747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.478754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94224 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.478870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.478883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.478891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.479008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126832 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.479028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.479154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.479165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.479261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82544 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.479281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.479292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.479299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.479307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34440 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.479437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.479714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.479736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.479745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47848 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.479755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.479764] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.479771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.479779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91744 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.479788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.479899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.479918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.479927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110320 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.480183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.480206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.480330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.480343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54336 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.480463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.480480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.480707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.480729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22048 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.480740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.480751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.480758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.480767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92056 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.480782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.480791] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.480902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.480915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10328 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.480924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.480934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued 
i/o 00:23:57.014 [2024-12-16 01:45:27.481066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.481194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78984 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.481330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.481356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.481497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.481592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119976 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.481602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.014 [2024-12-16 01:45:27.481613] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.014 [2024-12-16 01:45:27.481621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.014 [2024-12-16 01:45:27.481629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126576 len:8 PRP1 0x0 PRP2 0x0 00:23:57.014 [2024-12-16 01:45:27.481638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.481647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.481912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.482046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69488 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.482068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.482212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.482339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.482351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49768 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.482629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.482652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.482660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.482668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121088 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.482678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.482688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.482695] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.482703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125376 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.482712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.482721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.482728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.482984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:31816 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.482996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87160 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483037] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:117720 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36528 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:23:57.015 [2024-12-16 01:45:27.483598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10848 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 01:45:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 99703 00:23:57.015 [2024-12-16 01:45:27.483651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6792 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56792 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92400 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.483738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.483747] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:57.015 [2024-12-16 01:45:27.483754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:57.015 [2024-12-16 01:45:27.483761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39384 len:8 PRP1 0x0 PRP2 0x0 00:23:57.015 [2024-12-16 01:45:27.484006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.484280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.015 [2024-12-16 
01:45:27.484303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.484315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.015 [2024-12-16 01:45:27.484323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.484333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.015 [2024-12-16 01:45:27.484341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.484350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.015 [2024-12-16 01:45:27.484359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.015 [2024-12-16 01:45:27.484504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430a90 is same with the state(6) to be set 00:23:57.015 [2024-12-16 01:45:27.485205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:57.015 [2024-12-16 01:45:27.485255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430a90 (9): Bad file descriptor 00:23:57.015 [2024-12-16 01:45:27.485355] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.015 [2024-12-16 01:45:27.485625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430a90 with addr=10.0.0.3, port=4420 00:23:57.015 [2024-12-16 01:45:27.485651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430a90 is same with the state(6) to be set 00:23:57.015 [2024-12-16 01:45:27.485673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430a90 (9): Bad file descriptor 00:23:57.015 [2024-12-16 01:45:27.485690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:57.015 [2024-12-16 01:45:27.485700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:57.015 [2024-12-16 01:45:27.485711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:57.015 [2024-12-16 01:45:27.485721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:23:57.015 [2024-12-16 01:45:27.485860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:58.886 10002.50 IOPS, 39.07 MiB/s [2024-12-16T01:45:29.544Z] 6668.33 IOPS, 26.05 MiB/s [2024-12-16T01:45:29.544Z] [2024-12-16 01:45:29.486113] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:58.886 [2024-12-16 01:45:29.486215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430a90 with addr=10.0.0.3, port=4420
00:23:58.886 [2024-12-16 01:45:29.486231] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430a90 is same with the state(6) to be set
00:23:58.886 [2024-12-16 01:45:29.486254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430a90 (9): Bad file descriptor
00:23:58.886 [2024-12-16 01:45:29.486274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:23:58.886 [2024-12-16 01:45:29.486284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:23:58.887 [2024-12-16 01:45:29.486304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:58.887 [2024-12-16 01:45:29.486314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:23:58.887 [2024-12-16 01:45:29.486325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:00.758 5001.25 IOPS, 19.54 MiB/s [2024-12-16T01:45:31.675Z] 4001.00 IOPS, 15.63 MiB/s [2024-12-16T01:45:31.675Z] [2024-12-16 01:45:31.486465] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:24:01.017 [2024-12-16 01:45:31.486582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1430a90 with addr=10.0.0.3, port=4420
00:24:01.017 [2024-12-16 01:45:31.486599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430a90 is same with the state(6) to be set
00:24:01.017 [2024-12-16 01:45:31.486621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1430a90 (9): Bad file descriptor
00:24:01.017 [2024-12-16 01:45:31.486640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:24:01.017 [2024-12-16 01:45:31.486649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:24:01.017 [2024-12-16 01:45:31.486659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:01.017 [2024-12-16 01:45:31.486669] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:24:01.017 [2024-12-16 01:45:31.486680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:24:02.889 3334.17 IOPS, 13.02 MiB/s [2024-12-16T01:45:33.547Z] 2857.86 IOPS, 11.16 MiB/s [2024-12-16T01:45:33.547Z] [2024-12-16 01:45:33.486744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:24:02.889 [2024-12-16 01:45:33.486799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:24:02.889 [2024-12-16 01:45:33.486811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:24:02.889 [2024-12-16 01:45:33.486821] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:24:02.889 [2024-12-16 01:45:33.486831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:24:04.084 2500.62 IOPS, 9.77 MiB/s
00:24:04.084 Latency(us)
00:24:04.084 [2024-12-16T01:45:34.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:04.084 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:24:04.084 NVMe0n1 : 8.19 2441.33 9.54 15.62 0.00 52133.74 7000.44 7046430.72
00:24:04.084 [2024-12-16T01:45:34.742Z] ===================================================================================================================
00:24:04.084 [2024-12-16T01:45:34.742Z] Total : 2441.33 9.54 15.62 0.00 52133.74 7000.44 7046430.72
00:24:04.084 {
00:24:04.084 "results": [
00:24:04.084 {
00:24:04.085 "job": "NVMe0n1",
00:24:04.085 "core_mask": "0x4",
00:24:04.085 "workload": "randread",
00:24:04.085 "status": "finished",
00:24:04.085 "queue_depth": 128,
00:24:04.085 "io_size": 4096,
00:24:04.085 "runtime": 8.194306,
00:24:04.085 "iops": 2441.329381646231,
00:24:04.085 "mibps": 9.53644289705559,
00:24:04.085 "io_failed": 128,
00:24:04.085 "io_timeout": 0,
00:24:04.085 "avg_latency_us": 52133.74403345028,
00:24:04.085 "min_latency_us": 7000.436363636363,
00:24:04.085 "max_latency_us": 7046430.72
00:24:04.085 }
00:24:04.085 ],
00:24:04.085 "core_count": 1
00:24:04.085 }
00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:04.085 Attaching 5 probes...
00:24:04.085 1400.487052: reset bdev controller NVMe0 00:24:04.085 1400.579341: reconnect bdev controller NVMe0 00:24:04.085 3401.290727: reconnect delay bdev controller NVMe0 00:24:04.085 3401.324645: reconnect bdev controller NVMe0 00:24:04.085 5401.643055: reconnect delay bdev controller NVMe0 00:24:04.085 5401.664989: reconnect bdev controller NVMe0 00:24:04.085 7401.994237: reconnect delay bdev controller NVMe0 00:24:04.085 7402.028307: reconnect bdev controller NVMe0 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 99662 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 99653 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99653 ']' 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99653 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99653 00:24:04.085 killing process with pid 99653 00:24:04.085 Received shutdown signal, test time was about 8.260604 seconds 00:24:04.085 00:24:04.085 Latency(us) 00:24:04.085 [2024-12-16T01:45:34.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.085 [2024-12-16T01:45:34.743Z] =================================================================================================================== 00:24:04.085 [2024-12-16T01:45:34.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99653' 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99653 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99653 00:24:04.085 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:24:04.344 01:45:34 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.344 01:45:34 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.344 rmmod nvme_tcp 00:24:04.344 rmmod nvme_fabrics 00:24:04.344 rmmod nvme_keyring 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 99243 ']' 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 99243 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 99243 ']' 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 99243 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99243 00:24:04.601 killing process with pid 99243 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99243' 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 99243 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 99243 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.601 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:04.602 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:04.860 01:45:35 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:24:04.860 00:24:04.860 real 0m44.426s 00:24:04.860 user 2m9.612s 00:24:04.860 sys 0m5.637s 00:24:04.860 ************************************ 00:24:04.860 END TEST nvmf_timeout 00:24:04.860 ************************************ 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:04.860 ************************************ 00:24:04.860 END TEST nvmf_host 00:24:04.860 ************************************ 00:24:04.860 00:24:04.860 real 5m39.329s 00:24:04.860 user 15m55.605s 00:24:04.860 sys 1m16.380s 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.860 01:45:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.860 01:45:35 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:24:04.860 01:45:35 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:24:04.860 ************************************ 00:24:04.860 END TEST nvmf_tcp 00:24:04.860 ************************************ 00:24:04.860 00:24:04.860 real 14m59.750s 00:24:04.860 user 39m27.394s 00:24:04.860 sys 4m3.660s 00:24:04.860 01:45:35 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.860 01:45:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.120 01:45:35 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:24:05.120 01:45:35 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:05.120 01:45:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.120 01:45:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.120 01:45:35 -- common/autotest_common.sh@10 -- # set +x 00:24:05.120 ************************************ 00:24:05.120 START TEST nvmf_dif 00:24:05.120 ************************************ 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:24:05.120 * Looking for test storage... 
00:24:05.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:05.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.120 --rc genhtml_branch_coverage=1 00:24:05.120 --rc genhtml_function_coverage=1 00:24:05.120 --rc genhtml_legend=1 00:24:05.120 --rc geninfo_all_blocks=1 00:24:05.120 --rc geninfo_unexecuted_blocks=1 00:24:05.120 00:24:05.120 ' 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:05.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.120 --rc genhtml_branch_coverage=1 00:24:05.120 --rc genhtml_function_coverage=1 00:24:05.120 --rc genhtml_legend=1 00:24:05.120 --rc geninfo_all_blocks=1 00:24:05.120 --rc geninfo_unexecuted_blocks=1 00:24:05.120 00:24:05.120 ' 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:05.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.120 --rc genhtml_branch_coverage=1 00:24:05.120 --rc genhtml_function_coverage=1 00:24:05.120 --rc genhtml_legend=1 00:24:05.120 --rc geninfo_all_blocks=1 00:24:05.120 --rc geninfo_unexecuted_blocks=1 00:24:05.120 00:24:05.120 ' 00:24:05.120 01:45:35 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:05.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.120 --rc genhtml_branch_coverage=1 00:24:05.120 --rc genhtml_function_coverage=1 00:24:05.120 --rc genhtml_legend=1 00:24:05.120 --rc geninfo_all_blocks=1 00:24:05.120 --rc geninfo_unexecuted_blocks=1 00:24:05.120 00:24:05.120 ' 00:24:05.120 01:45:35 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:febd874a-f7ac-4dde-b5e1-60c80814d053 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=febd874a-f7ac-4dde-b5e1-60c80814d053 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.120 01:45:35 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.120 01:45:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.120 01:45:35 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.120 01:45:35 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.120 01:45:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:24:05.120 01:45:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:05.120 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.120 01:45:35 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.120 01:45:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:24:05.380 01:45:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:24:05.380 01:45:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:24:05.380 01:45:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:24:05.380 01:45:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.380 01:45:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:05.380 01:45:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:05.380 01:45:35 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.380 Cannot find device "nvmf_init_br" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@162 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.380 Cannot find device "nvmf_init_br2" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@163 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.380 Cannot find device "nvmf_tgt_br" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@164 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.380 Cannot find device "nvmf_tgt_br2" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@165 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.380 Cannot find device "nvmf_init_br" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@166 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.380 Cannot find device "nvmf_init_br2" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@167 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.380 Cannot find device "nvmf_tgt_br" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@168 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.380 Cannot find device "nvmf_tgt_br2" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@169 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.380 Cannot find device "nvmf_br" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@170 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:24:05.380 Cannot find device "nvmf_init_if" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@171 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:05.380 Cannot find device "nvmf_init_if2" 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@172 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@173 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.380 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@174 -- # true 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.380 01:45:35 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.380 01:45:36 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.639 01:45:36 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.639 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:05.639 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:24:05.639 00:24:05.639 --- 10.0.0.3 ping statistics --- 00:24:05.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.639 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.639 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.639 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:24:05.639 00:24:05.639 --- 10.0.0.4 ping statistics --- 00:24:05.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.639 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:05.639 00:24:05.639 --- 10.0.0.1 ping statistics --- 00:24:05.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.639 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:24:05.639 00:24:05.639 --- 10.0.0.2 ping statistics --- 00:24:05.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.639 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:05.639 01:45:36 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:05.898 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:05.898 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:05.898 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.157 01:45:36 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:24:06.157 01:45:36 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=100184 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:06.157 01:45:36 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 100184 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 100184 ']' 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.157 01:45:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.157 [2024-12-16 01:45:36.668299] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:06.157 [2024-12-16 01:45:36.668396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.417 [2024-12-16 01:45:36.825037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.417 [2024-12-16 01:45:36.849827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:06.417 [2024-12-16 01:45:36.849891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.417 [2024-12-16 01:45:36.849906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.417 [2024-12-16 01:45:36.849916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.417 [2024-12-16 01:45:36.849926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.417 [2024-12-16 01:45:36.850308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.417 [2024-12-16 01:45:36.887960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:24:06.417 01:45:36 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 01:45:36 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.417 01:45:36 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:24:06.417 01:45:36 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 [2024-12-16 01:45:36.990227] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.417 01:45:36 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:06.417 01:45:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 ************************************ 00:24:06.417 START TEST fio_dif_1_default 00:24:06.417 ************************************ 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 bdev_null0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:06.417 
01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:06.417 [2024-12-16 01:45:37.038400] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:06.417 { 00:24:06.417 "params": { 00:24:06.417 "name": "Nvme$subsystem", 00:24:06.417 "trtype": "$TEST_TRANSPORT", 00:24:06.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.417 "adrfam": "ipv4", 00:24:06.417 "trsvcid": "$NVMF_PORT", 00:24:06.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.417 "hdgst": ${hdgst:-false}, 00:24:06.417 "ddgst": ${ddgst:-false} 00:24:06.417 }, 00:24:06.417 "method": "bdev_nvme_attach_controller" 00:24:06.417 } 00:24:06.417 EOF 00:24:06.417 )") 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@582 -- # cat 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:24:06.417 01:45:37 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:06.417 "params": { 00:24:06.417 "name": "Nvme0", 00:24:06.417 "trtype": "tcp", 00:24:06.417 "traddr": "10.0.0.3", 00:24:06.417 "adrfam": "ipv4", 00:24:06.417 "trsvcid": "4420", 00:24:06.417 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.417 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:06.417 "hdgst": false, 00:24:06.417 "ddgst": false 00:24:06.417 }, 00:24:06.417 "method": "bdev_nvme_attach_controller" 00:24:06.417 }' 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:06.676 01:45:37 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.676 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:06.676 fio-3.35 00:24:06.676 Starting 1 thread 00:24:18.899 00:24:18.899 filename0: (groupid=0, jobs=1): err= 0: pid=100243: Mon Dec 16 01:45:47 2024 00:24:18.899 read: IOPS=9825, BW=38.4MiB/s (40.2MB/s)(384MiB/10001msec) 00:24:18.899 slat (nsec): min=5997, max=57899, avg=7709.15, stdev=3276.66 00:24:18.899 clat (usec): min=322, max=3562, avg=384.22, stdev=43.99 00:24:18.899 lat (usec): min=328, max=3589, avg=391.93, stdev=44.79 00:24:18.899 clat percentiles (usec): 00:24:18.899 | 1.00th=[ 326], 5.00th=[ 
334], 10.00th=[ 343], 20.00th=[ 351], 00:24:18.899 | 30.00th=[ 363], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 388], 00:24:18.899 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[ 433], 95.00th=[ 457], 00:24:18.899 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 594], 00:24:18.899 | 99.99th=[ 717] 00:24:18.899 bw ( KiB/s): min=37237, max=40448, per=99.96%, avg=39283.63, stdev=840.11, samples=19 00:24:18.899 iops : min= 9309, max=10112, avg=9820.89, stdev=210.06, samples=19 00:24:18.899 lat (usec) : 500=98.77%, 750=1.22% 00:24:18.899 lat (msec) : 2=0.01%, 4=0.01% 00:24:18.899 cpu : usr=84.41%, sys=13.70%, ctx=20, majf=0, minf=0 00:24:18.899 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:18.899 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.899 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.899 issued rwts: total=98260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.899 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:18.899 00:24:18.899 Run status group 0 (all jobs): 00:24:18.899 READ: bw=38.4MiB/s (40.2MB/s), 38.4MiB/s-38.4MiB/s (40.2MB/s-40.2MB/s), io=384MiB (402MB), run=10001-10001msec 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 ************************************ 00:24:18.899 END TEST fio_dif_1_default 00:24:18.899 ************************************ 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 00:24:18.899 real 0m10.894s 00:24:18.899 user 0m8.998s 00:24:18.899 sys 0m1.607s 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:47 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:24:18.899 01:45:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:18.899 01:45:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 ************************************ 00:24:18.899 START TEST fio_dif_1_multi_subsystems 00:24:18.899 ************************************ 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # 
fio_dif_1_multi_subsystems 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 bdev_null0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 [2024-12-16 01:45:47.984335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 bdev_null1 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:47 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:18.899 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.900 { 00:24:18.900 "params": { 00:24:18.900 "name": "Nvme$subsystem", 00:24:18.900 "trtype": "$TEST_TRANSPORT", 00:24:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.900 "adrfam": "ipv4", 00:24:18.900 "trsvcid": "$NVMF_PORT", 00:24:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.900 "hdgst": ${hdgst:-false}, 00:24:18.900 "ddgst": ${ddgst:-false} 00:24:18.900 }, 00:24:18.900 "method": "bdev_nvme_attach_controller" 00:24:18.900 } 00:24:18.900 EOF 00:24:18.900 )") 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:24:18.900 01:45:48 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:18.900 { 00:24:18.900 "params": { 00:24:18.900 "name": "Nvme$subsystem", 00:24:18.900 "trtype": "$TEST_TRANSPORT", 00:24:18.900 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.900 "adrfam": "ipv4", 00:24:18.900 "trsvcid": "$NVMF_PORT", 00:24:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.900 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.900 "hdgst": ${hdgst:-false}, 00:24:18.900 "ddgst": ${ddgst:-false} 00:24:18.900 }, 00:24:18.900 "method": "bdev_nvme_attach_controller" 00:24:18.900 } 00:24:18.900 EOF 00:24:18.900 )") 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:18.900 "params": { 00:24:18.900 "name": "Nvme0", 00:24:18.900 "trtype": "tcp", 00:24:18.900 "traddr": "10.0.0.3", 00:24:18.900 "adrfam": "ipv4", 00:24:18.900 "trsvcid": "4420", 00:24:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:18.900 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:18.900 "hdgst": false, 00:24:18.900 "ddgst": false 00:24:18.900 }, 00:24:18.900 "method": "bdev_nvme_attach_controller" 00:24:18.900 },{ 00:24:18.900 "params": { 00:24:18.900 "name": "Nvme1", 00:24:18.900 "trtype": "tcp", 00:24:18.900 "traddr": "10.0.0.3", 00:24:18.900 "adrfam": "ipv4", 00:24:18.900 "trsvcid": "4420", 00:24:18.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.900 "hdgst": false, 00:24:18.900 "ddgst": false 00:24:18.900 }, 00:24:18.900 "method": "bdev_nvme_attach_controller" 00:24:18.900 }' 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:18.900 01:45:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:18.900 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:18.900 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:24:18.900 fio-3.35 00:24:18.900 Starting 2 threads 00:24:28.907 00:24:28.907 filename0: (groupid=0, jobs=1): err= 0: pid=100403: Mon Dec 16 01:45:58 2024 00:24:28.907 read: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(205MiB/10001msec) 00:24:28.907 slat (nsec): min=6425, max=77699, avg=12640.27, stdev=4693.20 00:24:28.907 clat (usec): min=549, max=2974, avg=729.40, stdev=65.91 00:24:28.907 lat (usec): min=557, max=3000, avg=742.04, stdev=66.96 00:24:28.907 clat percentiles (usec): 00:24:28.907 | 1.00th=[ 611], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 685], 00:24:28.907 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 734], 00:24:28.907 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 848], 00:24:28.907 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1020], 99.95th=[ 1057], 00:24:28.907 | 99.99th=[ 1631] 00:24:28.907 bw ( KiB/s): min=20416, max=21472, per=49.98%, avg=20948.21, stdev=302.49, samples=19 00:24:28.907 iops : min= 5104, max= 
5368, avg=5237.05, stdev=75.62, samples=19 00:24:28.907 lat (usec) : 750=69.21%, 1000=30.63% 00:24:28.907 lat (msec) : 2=0.15%, 4=0.01% 00:24:28.907 cpu : usr=89.67%, sys=9.04%, ctx=24, majf=0, minf=0 00:24:28.908 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.908 issued rwts: total=52392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.908 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:28.908 filename1: (groupid=0, jobs=1): err= 0: pid=100404: Mon Dec 16 01:45:58 2024 00:24:28.908 read: IOPS=5239, BW=20.5MiB/s (21.5MB/s)(205MiB/10001msec) 00:24:28.908 slat (nsec): min=6357, max=70799, avg=12668.49, stdev=4720.83 00:24:28.908 clat (usec): min=433, max=2721, avg=728.46, stdev=59.52 00:24:28.908 lat (usec): min=440, max=2790, avg=741.13, stdev=60.19 00:24:28.908 clat percentiles (usec): 00:24:28.908 | 1.00th=[ 644], 5.00th=[ 660], 10.00th=[ 668], 20.00th=[ 685], 00:24:28.908 | 30.00th=[ 693], 40.00th=[ 709], 50.00th=[ 717], 60.00th=[ 734], 00:24:28.908 | 70.00th=[ 750], 80.00th=[ 766], 90.00th=[ 807], 95.00th=[ 840], 00:24:28.908 | 99.00th=[ 914], 99.50th=[ 938], 99.90th=[ 996], 99.95th=[ 1020], 00:24:28.908 | 99.99th=[ 1090] 00:24:28.908 bw ( KiB/s): min=20416, max=21472, per=49.99%, avg=20951.58, stdev=302.23, samples=19 00:24:28.908 iops : min= 5104, max= 5368, avg=5237.89, stdev=75.56, samples=19 00:24:28.908 lat (usec) : 500=0.01%, 750=71.81%, 1000=28.09% 00:24:28.908 lat (msec) : 2=0.09%, 4=0.01% 00:24:28.908 cpu : usr=89.69%, sys=9.00%, ctx=8, majf=0, minf=0 00:24:28.908 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:28.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:28.908 issued rwts: total=52400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:28.908 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:28.908 00:24:28.908 Run status group 0 (all jobs): 00:24:28.908 READ: bw=40.9MiB/s (42.9MB/s), 20.5MiB/s-20.5MiB/s (21.5MB/s-21.5MB/s), io=409MiB (429MB), run=10001-10001msec 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 ************************************ 00:24:28.908 END TEST fio_dif_1_multi_subsystems 00:24:28.908 ************************************ 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 00:24:28.908 real 0m11.000s 00:24:28.908 user 0m18.609s 00:24:28.908 sys 0m2.031s 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:28.908 01:45:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 01:45:58 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:28.908 01:45:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:28.908 01:45:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:28.908 01:45:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 ************************************ 00:24:28.908 START TEST fio_dif_rand_params 00:24:28.908 ************************************ 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:28.908 01:45:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 bdev_null0 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:28.908 [2024-12-16 01:45:59.042348] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.908 { 00:24:28.908 "params": { 00:24:28.908 "name": "Nvme$subsystem", 00:24:28.908 "trtype": "$TEST_TRANSPORT", 00:24:28.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.908 "adrfam": "ipv4", 00:24:28.908 "trsvcid": "$NVMF_PORT", 00:24:28.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.908 "hdgst": ${hdgst:-false}, 00:24:28.908 "ddgst": ${ddgst:-false} 00:24:28.908 }, 00:24:28.908 "method": "bdev_nvme_attach_controller" 00:24:28.908 } 00:24:28.908 EOF 00:24:28.908 )") 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:28.908 
01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:28.908 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:28.909 "params": { 00:24:28.909 "name": "Nvme0", 00:24:28.909 "trtype": "tcp", 00:24:28.909 "traddr": "10.0.0.3", 00:24:28.909 "adrfam": "ipv4", 00:24:28.909 "trsvcid": "4420", 00:24:28.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:28.909 "hdgst": false, 00:24:28.909 "ddgst": false 00:24:28.909 }, 00:24:28.909 "method": "bdev_nvme_attach_controller" 00:24:28.909 }' 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:28.909 01:45:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:28.909 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:28.909 ... 00:24:28.909 fio-3.35 00:24:28.909 Starting 3 threads 00:24:34.181 00:24:34.181 filename0: (groupid=0, jobs=1): err= 0: pid=100562: Mon Dec 16 01:46:04 2024 00:24:34.181 read: IOPS=272, BW=34.1MiB/s (35.8MB/s)(171MiB/5004msec) 00:24:34.181 slat (nsec): min=6668, max=39243, avg=9391.12, stdev=3682.83 00:24:34.181 clat (usec): min=4225, max=13033, avg=10974.59, stdev=526.41 00:24:34.181 lat (usec): min=4239, max=13045, avg=10983.99, stdev=526.33 00:24:34.181 clat percentiles (usec): 00:24:34.181 | 1.00th=[10552], 5.00th=[10683], 10.00th=[10683], 20.00th=[10683], 00:24:34.181 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:24:34.181 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:24:34.181 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13042], 99.95th=[13042], 00:24:34.181 | 99.99th=[13042] 00:24:34.181 bw ( KiB/s): min=33792, max=36096, per=33.35%, avg=34901.33, stdev=677.31, samples=9 00:24:34.181 iops : min= 264, max= 282, avg=272.67, stdev= 5.29, samples=9 00:24:34.181 lat (msec) : 10=0.22%, 20=99.78% 00:24:34.181 cpu : usr=90.63%, sys=8.81%, ctx=9, majf=0, minf=0 00:24:34.181 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.181 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.181 filename0: (groupid=0, jobs=1): err= 0: pid=100563: Mon Dec 16 01:46:04 2024 00:24:34.181 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(171MiB/5008msec) 00:24:34.181 slat (nsec): min=6700, max=45551, avg=12651.00, stdev=5099.34 00:24:34.181 clat (usec): min=7846, max=13180, avg=10977.10, stdev=459.64 00:24:34.181 lat (usec): min=7853, max=13192, avg=10989.75, stdev=459.88 00:24:34.181 clat percentiles (usec): 00:24:34.181 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:24:34.181 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:24:34.181 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:24:34.181 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:24:34.181 | 99.99th=[13173] 00:24:34.181 bw ( KiB/s): min=34560, max=35328, per=33.31%, avg=34867.20, stdev=396.59, samples=10 00:24:34.181 iops : min= 270, max= 276, avg=272.40, stdev= 3.10, samples=10 00:24:34.181 lat (msec) : 10=0.44%, 20=99.56% 00:24:34.181 cpu : usr=91.11%, sys=8.35%, ctx=7, majf=0, minf=0 00:24:34.181 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.181 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.181 filename0: (groupid=0, jobs=1): err= 0: pid=100564: Mon Dec 16 01:46:04 2024 00:24:34.181 read: IOPS=272, BW=34.1MiB/s (35.7MB/s)(171MiB/5008msec) 00:24:34.181 slat (nsec): min=6708, max=53441, avg=12888.01, stdev=5614.87 00:24:34.181 clat (usec): min=7470, max=13166, 
avg=10976.50, stdev=486.55 00:24:34.181 lat (usec): min=7477, max=13185, avg=10989.39, stdev=487.04 00:24:34.181 clat percentiles (usec): 00:24:34.181 | 1.00th=[10552], 5.00th=[10552], 10.00th=[10683], 20.00th=[10683], 00:24:34.181 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10814], 60.00th=[10814], 00:24:34.181 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:24:34.181 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13173], 99.95th=[13173], 00:24:34.181 | 99.99th=[13173] 00:24:34.181 bw ( KiB/s): min=34560, max=35328, per=33.31%, avg=34867.20, stdev=396.59, samples=10 00:24:34.181 iops : min= 270, max= 276, avg=272.40, stdev= 3.10, samples=10 00:24:34.181 lat (msec) : 10=0.44%, 20=99.56% 00:24:34.181 cpu : usr=90.55%, sys=8.49%, ctx=71, majf=0, minf=0 00:24:34.181 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.181 issued rwts: total=1365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.181 00:24:34.181 Run status group 0 (all jobs): 00:24:34.181 READ: bw=102MiB/s (107MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.8MB/s), io=512MiB (537MB), run=5004-5008msec 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.441 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.441 bdev_null0 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 [2024-12-16 01:46:04.936932] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 bdev_null1 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 bdev_null2 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.442 01:46:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:34.442 { 00:24:34.442 "params": { 00:24:34.442 "name": "Nvme$subsystem", 00:24:34.442 "trtype": "$TEST_TRANSPORT", 00:24:34.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.442 "adrfam": "ipv4", 00:24:34.442 "trsvcid": "$NVMF_PORT", 00:24:34.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.442 "hdgst": ${hdgst:-false}, 00:24:34.442 "ddgst": ${ddgst:-false} 00:24:34.442 }, 00:24:34.442 "method": "bdev_nvme_attach_controller" 00:24:34.442 } 00:24:34.442 EOF 00:24:34.442 )") 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:34.442 { 00:24:34.442 "params": { 00:24:34.442 "name": "Nvme$subsystem", 00:24:34.442 "trtype": "$TEST_TRANSPORT", 00:24:34.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.442 "adrfam": "ipv4", 00:24:34.442 "trsvcid": "$NVMF_PORT", 00:24:34.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.442 "hdgst": ${hdgst:-false}, 00:24:34.442 "ddgst": ${ddgst:-false} 00:24:34.442 }, 00:24:34.442 "method": "bdev_nvme_attach_controller" 00:24:34.442 } 00:24:34.442 EOF 00:24:34.442 )") 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 
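At this point in the trace the test has exported three null bdevs over NVMe/TCP (nqn.2016-06.io.spdk:cnode0, cnode1 and cnode2 listening on 10.0.0.3:4420) and is assembling the two descriptors it hands to fio: a JSON bdev configuration built from the bdev_nvme_attach_controller blocks printed further down, and a fio job file with one job per subsystem. A minimal standalone sketch of the equivalent invocation follows; it assumes the SPDK fio bdev plugin built at the path shown in the trace, and the job-file contents are illustrative, inferred from the parameters visible in this run (4 KiB random reads, numjobs=8, iodepth=16, jobs filename0..filename2 backed by Nvme0n1..Nvme2n1):

    # illustrative sketch only, not part of the captured log
    cat > dif.fio <<'EOF'
    [global]
    thread=1
    rw=randread
    bs=4k
    numjobs=8
    iodepth=16
    [filename0]
    filename=Nvme0n1
    [filename1]
    filename=Nvme1n1
    [filename2]
    filename=Nvme2n1
    EOF
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio dif.fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json

In the captured run both descriptors are generated on the fly and passed as /dev/fd/62 and /dev/fd/61, so nothing is written to disk; bdev.json above stands in for the JSON that gen_nvmf_target_json prints in the trace below.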
00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:34.442 { 00:24:34.442 "params": { 00:24:34.442 "name": "Nvme$subsystem", 00:24:34.442 "trtype": "$TEST_TRANSPORT", 00:24:34.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.442 "adrfam": "ipv4", 00:24:34.442 "trsvcid": "$NVMF_PORT", 00:24:34.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.442 "hdgst": ${hdgst:-false}, 00:24:34.442 "ddgst": ${ddgst:-false} 00:24:34.442 }, 00:24:34.442 "method": "bdev_nvme_attach_controller" 00:24:34.442 } 00:24:34.442 EOF 00:24:34.442 )") 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:34.442 01:46:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:34.442 "params": { 00:24:34.442 "name": "Nvme0", 00:24:34.442 "trtype": "tcp", 00:24:34.442 "traddr": "10.0.0.3", 00:24:34.442 "adrfam": "ipv4", 00:24:34.443 "trsvcid": "4420", 00:24:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:34.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:34.443 "hdgst": false, 00:24:34.443 "ddgst": false 00:24:34.443 }, 00:24:34.443 "method": "bdev_nvme_attach_controller" 00:24:34.443 },{ 00:24:34.443 "params": { 00:24:34.443 "name": "Nvme1", 00:24:34.443 "trtype": "tcp", 00:24:34.443 "traddr": "10.0.0.3", 00:24:34.443 "adrfam": "ipv4", 00:24:34.443 "trsvcid": "4420", 00:24:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.443 "hdgst": false, 00:24:34.443 "ddgst": false 00:24:34.443 }, 00:24:34.443 "method": "bdev_nvme_attach_controller" 00:24:34.443 },{ 00:24:34.443 "params": { 00:24:34.443 "name": "Nvme2", 00:24:34.443 "trtype": "tcp", 00:24:34.443 "traddr": "10.0.0.3", 00:24:34.443 "adrfam": "ipv4", 00:24:34.443 "trsvcid": "4420", 00:24:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:34.443 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:34.443 "hdgst": false, 00:24:34.443 "ddgst": false 00:24:34.443 }, 00:24:34.443 "method": "bdev_nvme_attach_controller" 00:24:34.443 }' 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:34.443 
01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:34.443 01:46:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:34.712 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:34.712 ... 00:24:34.712 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:34.712 ... 00:24:34.712 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:34.712 ... 00:24:34.712 fio-3.35 00:24:34.712 Starting 24 threads 00:24:46.923 fio: pid=100659, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.923 [2024-12-16 01:46:16.527874] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1d27e00 via correct icresp 00:24:46.923 [2024-12-16 01:46:16.527941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d27e00 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=65564672, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=58109952, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=44072960, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=34648064, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=40460288, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=65613824, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=966656, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=15175680, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=21958656, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=56102912, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=45244416, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=37945344, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=63819776, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=25673728, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=864256, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=50180096, buflen=4096 00:24:46.923 [2024-12-16 01:46:16.542839] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1d27c20 via correct icresp 00:24:46.923 [2024-12-16 01:46:16.542877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d27c20 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=39387136, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=47931392, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output 
error: read offset=21307392, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=58040320, buflen=4096 00:24:46.923 fio: pid=100674, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=46010368, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=42749952, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=35491840, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=57552896, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=17387520, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=37646336, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=42897408, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=28008448, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=42741760, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=2555904, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=34897920, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=50384896, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=66551808, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=5672960, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=59187200, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=28147712, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=5283840, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=1843200, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=40157184, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=9199616, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=16580608, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=39104512, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=35598336, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=48218112, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=1863680, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=2641920, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=36839424, buflen=4096 00:24:46.923 fio: io_u error on file Nvme1n1: Input/output error: read offset=33583104, buflen=4096 00:24:46.923 fio: pid=100672, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.923 [2024-12-16 01:46:16.546844] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386e960 via correct icresp 00:24:46.923 [2024-12-16 01:46:16.546880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386e960 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=15663104, buflen=4096 00:24:46.923 fio: io_u error 
on file Nvme0n1: Input/output error: read offset=9408512, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=15581184, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=27107328, buflen=4096 00:24:46.923 fio: pid=100660, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=55336960, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=39804928, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=46354432, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=34320384, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=9441280, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=26243072, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=39927808, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=55951360, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=46231552, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=23080960, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=37081088, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=33689600, buflen=4096 00:24:46.923 [2024-12-16 01:46:16.552844] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x1d27a40 via correct icresp 00:24:46.923 [2024-12-16 01:46:16.552882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d27a40 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=44494848, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=7815168, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=19873792, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=6111232, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=16973824, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=49205248, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=31784960, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=7819264, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=59662336, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=34951168, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=1232896, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=54398976, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=61435904, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=61767680, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=40538112, buflen=4096 00:24:46.923 fio: io_u error on file Nvme2n1: Input/output error: read offset=51154944, buflen=4096 00:24:46.923 [2024-12-16 01:46:16.556849] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: 
*ERROR*: Failed to construct the tqpair=0x386e780 via correct icresp 00:24:46.923 [2024-12-16 01:46:16.556888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386e780 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=51261440, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=23396352, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=57282560, buflen=4096 00:24:46.923 fio: io_u error on file Nvme0n1: Input/output error: read offset=21434368, buflen=4096 00:24:46.923 fio: pid=100663, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=10530816, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=2691072, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=15687680, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=49655808, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=23162880, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=65122304, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=3477504, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=5808128, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=32575488, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=36069376, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=32911360, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=9801728, buflen=4096 00:24:46.924 fio: pid=100681, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 [2024-12-16 01:46:16.569099] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386ef00 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.569299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386ef00 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=47452160, buflen=4096 00:24:46.924 fio: pid=100668, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=34287616, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=831488, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=53870592, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=37478400, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=2457600, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=40656896, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=11014144, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=48861184, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=17141760, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=55959552, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: 
read offset=27402240, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=38703104, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=35835904, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=1691648, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=61239296, buflen=4096 00:24:46.924 fio: pid=100667, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 [2024-12-16 01:46:16.569108] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386e5a0 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386e5a0 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=45527040, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=31428608, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=38137856, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=48013312, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=19353600, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=60690432, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=5431296, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=40865792, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=28303360, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=54108160, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=45944832, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=36769792, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=19599360, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=9601024, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=6950912, buflen=4096 00:24:46.924 fio: io_u error on file Nvme1n1: Input/output error: read offset=58298368, buflen=4096 00:24:46.924 fio: pid=100662, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 [2024-12-16 01:46:16.569103] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386e1e0 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386e1e0 00:24:46.924 [2024-12-16 01:46:16.570797] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386f680 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570808] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386eb40 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570800] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386f860 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570801] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386ed20 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570803] 
nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386f0e0 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570807] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386f2c0 via correct icresp 00:24:46.924 [2024-12-16 01:46:16.570836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386f680 00:24:46.924 [2024-12-16 01:46:16.570936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386eb40 00:24:46.924 [2024-12-16 01:46:16.570970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386f860 00:24:46.924 [2024-12-16 01:46:16.570995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386ed20 00:24:46.924 [2024-12-16 01:46:16.571022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386f0e0 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=23298048, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=37404672, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=53178368, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=782336, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=39247872, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=13889536, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=13529088, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=66187264, buflen=4096 00:24:46.924 [2024-12-16 01:46:16.571106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386f2c0 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=42729472, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=21630976, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=59961344, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=8798208, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=32940032, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=12214272, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=58417152, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=7098368, buflen=4096 00:24:46.924 fio: pid=100678, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 fio: pid=100661, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=2039808, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=37330944, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=58757120, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=49201152, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=32235520, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=66932736, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: 
Input/output error: read offset=6856704, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=49119232, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=14749696, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=32546816, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=36941824, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=29216768, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=45797376, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=45506560, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=6017024, buflen=4096 00:24:46.924 fio: io_u error on file Nvme2n1: Input/output error: read offset=22528000, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=59994112, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=62394368, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=12816384, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=16596992, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=20307968, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=40996864, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=52510720, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=36761600, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=24530944, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=22044672, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=32223232, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=54571008, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=2121728, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=12918784, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=28672, buflen=4096 00:24:46.924 fio: io_u error on file Nvme0n1: Input/output error: read offset=18472960, buflen=4096 00:24:46.924 [2024-12-16 01:46:16.571351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d26000 (9): Bad file descriptor 00:24:46.924 fio: pid=100682, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.924 fio: pid=100671, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.925 fio: pid=100669, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.925 fio: pid=100679, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=38592512, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=49111040, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=16941056, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=34516992, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output 
error: read offset=16191488, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=10518528, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=50724864, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=53186560, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=34996224, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=37572608, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=19476480, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=27541504, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=44208128, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=36532224, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=38891520, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=65597440, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=49045504, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=24788992, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=33144832, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=9060352, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=38088704, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=60559360, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=40730624, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=9224192, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=913408, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=58646528, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=18313216, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=24666112, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=58052608, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=48078848, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=4517888, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=50819072, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=33353728, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=62767104, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=9048064, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=34938880, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=31834112, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=52871168, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=46837760, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=10993664, buflen=4096 00:24:46.925 fio: 
io_u error on file Nvme1n1: Input/output error: read offset=1417216, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=45432832, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=64974848, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=51412992, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=51191808, buflen=4096 00:24:46.925 [2024-12-16 01:46:16.571512] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386e3c0 via correct icresp 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=63549440, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=26009600, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=61190144, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=32485376, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=12398592, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=61280256, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=48734208, buflen=4096 00:24:46.925 fio: io_u error on file Nvme1n1: Input/output error: read offset=5083136, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=11575296, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=52445184, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=60080128, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=44896256, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=53538816, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=62349312, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=37318656, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=2191360, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=40357888, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=4063232, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=26181632, buflen=4096 00:24:46.925 [2024-12-16 01:46:16.571670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386e3c0 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=38490112, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=36057088, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=62701568, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=29208576, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=61161472, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=13312000, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=55738368, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=11722752, buflen=4096 00:24:46.925 fio: pid=100680, err=5/file:io_u.c:1889, func=io_u error, 
error=Input/output error 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=51109888, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=53731328, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=39395328, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=3325952, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=3260416, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=49025024, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=11665408, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=10444800, buflen=4096 00:24:46.925 [2024-12-16 01:46:16.572274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d261e0 (9): Bad file descriptor 00:24:46.925 [2024-12-16 01:46:16.572679] nvme_tcp.c:2319:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to construct the tqpair=0x386f4a0 via correct icresp 00:24:46.925 [2024-12-16 01:46:16.572716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x386f4a0 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=56950784, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=28827648, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=49283072, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=65605632, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=58163200, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=34717696, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=34066432, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=49029120, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=50032640, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=9048064, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=25878528, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=53391360, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=44511232, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=14729216, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=33435648, buflen=4096 00:24:46.925 fio: io_u error on file Nvme2n1: Input/output error: read offset=42016768, buflen=4096 00:24:46.925 fio: pid=100675, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:24:46.925 [2024-12-16 01:46:16.575679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d26960 (9): Bad file descriptor 00:24:46.925 00:24:46.925 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100659: Mon Dec 16 01:46:16 2024 00:24:46.925 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.925 
complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.925 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.925 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100660: Mon Dec 16 01:46:16 2024 00:24:46.925 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.925 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.925 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.925 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.925 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100661: Mon Dec 16 01:46:16 2024 00:24:46.925 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.925 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100662: Mon Dec 16 01:46:16 2024 00:24:46.926 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100663: Mon Dec 16 01:46:16 2024 00:24:46.926 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename0: (groupid=0, jobs=1): err= 0: pid=100664: Mon Dec 16 01:46:16 2024 00:24:46.926 read: IOPS=681, BW=2725KiB/s (2791kB/s)(26.6MiB/10013msec) 00:24:46.926 slat (usec): min=4, max=8029, avg=19.20, stdev=232.36 00:24:46.926 clat (usec): min=1024, max=59756, avg=23359.10, stdev=7784.98 00:24:46.926 lat (usec): min=1035, max=59765, avg=23378.31, stdev=7786.69 00:24:46.926 clat percentiles (usec): 00:24:46.926 | 1.00th=[ 7308], 5.00th=[ 9372], 10.00th=[12256], 20.00th=[17171], 00:24:46.926 | 30.00th=[21627], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:24:46.926 | 70.00th=[24249], 80.00th=[27919], 90.00th=[35914], 95.00th=[35914], 00:24:46.926 | 99.00th=[41681], 99.50th=[47449], 99.90th=[58459], 99.95th=[59507], 00:24:46.926 | 99.99th=[59507] 00:24:46.926 bw ( KiB/s): min= 1792, max= 5120, per=12.80%, avg=2729.68, stdev=663.62, 
samples=19 00:24:46.926 iops : min= 448, max= 1280, avg=682.42, stdev=165.91, samples=19 00:24:46.926 lat (msec) : 2=0.03%, 4=0.22%, 10=5.72%, 20=19.73%, 50=74.10% 00:24:46.926 lat (msec) : 100=0.21% 00:24:46.926 cpu : usr=31.69%, sys=2.46%, ctx=1066, majf=0, minf=9 00:24:46.926 IO depths : 1=1.2%, 2=5.9%, 4=20.1%, 8=60.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=93.2%, 8=2.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=6822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename0: (groupid=0, jobs=1): err= 0: pid=100665: Mon Dec 16 01:46:16 2024 00:24:46.926 read: IOPS=689, BW=2760KiB/s (2826kB/s)(27.0MiB/10016msec) 00:24:46.926 slat (usec): min=6, max=15167, avg=16.89, stdev=237.60 00:24:46.926 clat (usec): min=1291, max=71784, avg=23057.68, stdev=7965.10 00:24:46.926 lat (usec): min=1300, max=71793, avg=23074.57, stdev=7972.37 00:24:46.926 clat percentiles (usec): 00:24:46.926 | 1.00th=[ 3458], 5.00th=[ 9372], 10.00th=[12649], 20.00th=[16057], 00:24:46.926 | 30.00th=[20579], 40.00th=[22938], 50.00th=[23725], 60.00th=[23987], 00:24:46.926 | 70.00th=[23987], 80.00th=[27132], 90.00th=[35390], 95.00th=[35914], 00:24:46.926 | 99.00th=[47973], 99.50th=[47973], 99.90th=[55837], 99.95th=[59507], 00:24:46.926 | 99.99th=[71828] 00:24:46.926 bw ( KiB/s): min= 1776, max= 4688, per=12.93%, avg=2757.60, stdev=572.81, samples=20 00:24:46.926 iops : min= 444, max= 1172, avg=689.40, stdev=143.20, samples=20 00:24:46.926 lat (msec) : 2=0.45%, 4=0.71%, 10=5.01%, 20=21.79%, 50=71.79% 00:24:46.926 lat (msec) : 100=0.25% 00:24:46.926 cpu : usr=38.43%, sys=3.23%, ctx=1130, majf=0, minf=9 00:24:46.926 IO depths : 1=1.4%, 2=6.4%, 4=21.1%, 8=59.4%, 16=11.8%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=93.4%, 8=1.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=6910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename0: (groupid=0, jobs=1): err= 0: pid=100666: Mon Dec 16 01:46:16 2024 00:24:46.926 read: IOPS=732, BW=2929KiB/s (2999kB/s)(28.7MiB/10027msec) 00:24:46.926 slat (usec): min=5, max=10020, avg=16.12, stdev=186.70 00:24:46.926 clat (usec): min=1361, max=55809, avg=21707.16, stdev=8405.56 00:24:46.926 lat (usec): min=1371, max=55818, avg=21723.28, stdev=8407.81 00:24:46.926 clat percentiles (usec): 00:24:46.926 | 1.00th=[ 1663], 5.00th=[ 7177], 10.00th=[ 9896], 20.00th=[15795], 00:24:46.926 | 30.00th=[17957], 40.00th=[21103], 50.00th=[22938], 60.00th=[23987], 00:24:46.926 | 70.00th=[23987], 80.00th=[26084], 90.00th=[32900], 95.00th=[35914], 00:24:46.926 | 99.00th=[42730], 99.50th=[47449], 99.90th=[49021], 99.95th=[55837], 00:24:46.926 | 99.99th=[55837] 00:24:46.926 bw ( KiB/s): min= 1984, max= 6896, per=13.73%, avg=2928.80, stdev=996.49, samples=20 00:24:46.926 iops : min= 496, max= 1724, avg=732.15, stdev=249.14, samples=20 00:24:46.926 lat (msec) : 2=2.86%, 4=0.75%, 10=6.47%, 20=26.33%, 50=63.51% 00:24:46.926 lat (msec) : 100=0.08% 00:24:46.926 cpu : usr=42.27%, sys=3.59%, ctx=1359, majf=0, minf=0 00:24:46.926 IO depths : 1=1.1%, 2=5.3%, 4=18.2%, 8=62.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=92.8%, 
8=2.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=7342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100667: Mon Dec 16 01:46:16 2024 00:24:46.926 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100668: Mon Dec 16 01:46:16 2024 00:24:46.926 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:46.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100669: Mon Dec 16 01:46:16 2024 00:24:46.926 cpu : usr=0.00%, sys=0.00%, ctx=1, majf=0, minf=0 00:24:46.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename1: (groupid=0, jobs=1): err= 0: pid=100670: Mon Dec 16 01:46:16 2024 00:24:46.926 read: IOPS=713, BW=2855KiB/s (2923kB/s)(27.9MiB/10013msec) 00:24:46.926 slat (usec): min=4, max=4027, avg=15.84, stdev=126.58 00:24:46.926 clat (usec): min=1813, max=63738, avg=22305.37, stdev=7702.06 00:24:46.926 lat (usec): min=1822, max=63747, avg=22321.21, stdev=7703.60 00:24:46.926 clat percentiles (usec): 00:24:46.926 | 1.00th=[ 5473], 5.00th=[ 9241], 10.00th=[12387], 20.00th=[16057], 00:24:46.926 | 30.00th=[19006], 40.00th=[21103], 50.00th=[22414], 60.00th=[23462], 00:24:46.926 | 70.00th=[24249], 80.00th=[27919], 90.00th=[32375], 95.00th=[35914], 00:24:46.926 | 99.00th=[41681], 99.50th=[44827], 99.90th=[49546], 99.95th=[54789], 00:24:46.926 | 99.99th=[63701] 00:24:46.926 bw ( KiB/s): min= 1912, max= 4472, per=13.28%, avg=2832.42, stdev=547.12, samples=19 00:24:46.926 iops : min= 478, max= 1118, avg=708.11, stdev=136.78, samples=19 00:24:46.926 lat (msec) : 2=0.07%, 4=0.11%, 10=6.45%, 20=28.16%, 50=65.13% 00:24:46.926 lat (msec) : 100=0.08% 00:24:46.926 cpu : usr=41.01%, sys=3.34%, ctx=1417, majf=0, minf=9 00:24:46.926 IO depths : 1=1.2%, 2=5.4%, 4=18.2%, 8=62.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=92.6%, 8=3.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=7146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:24:46.926 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100671: Mon Dec 16 01:46:16 2024 00:24:46.926 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.926 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.926 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.926 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100672: Mon Dec 16 01:46:16 2024 00:24:46.926 read: IOPS=795, BW=3173KiB/s (3249kB/s)(20.2MiB/6507msec) 00:24:46.926 slat (usec): min=4, max=8020, avg=19.88, stdev=260.91 00:24:46.926 clat (usec): min=1652, max=59755, avg=20003.42, stdev=10746.62 00:24:46.926 lat (usec): min=1660, max=59764, avg=20023.34, stdev=10743.99 00:24:46.926 clat percentiles (usec): 00:24:46.926 | 1.00th=[ 1713], 5.00th=[ 1795], 10.00th=[ 3228], 20.00th=[ 9503], 00:24:46.926 | 30.00th=[12125], 40.00th=[20841], 50.00th=[23462], 60.00th=[23987], 00:24:46.926 | 70.00th=[23987], 80.00th=[26346], 90.00th=[35914], 95.00th=[35914], 00:24:46.926 | 99.00th=[47449], 99.50th=[47973], 99.90th=[54789], 99.95th=[59507], 00:24:46.926 | 99.99th=[59507] 00:24:46.926 bw ( KiB/s): min= 1848, max= 5388, per=12.67%, avg=2700.33, stdev=914.95, samples=12 00:24:46.926 iops : min= 462, max= 1347, avg=675.08, stdev=228.74, samples=12 00:24:46.926 lat (msec) : 2=8.09%, 4=3.01%, 10=12.15%, 20=15.22%, 50=61.10% 00:24:46.926 lat (msec) : 100=0.12% 00:24:46.926 cpu : usr=33.17%, sys=3.10%, ctx=617, majf=0, minf=9 00:24:46.926 IO depths : 1=1.4%, 2=6.3%, 4=20.7%, 8=59.6%, 16=12.0%, 32=0.0%, >=64=0.0% 00:24:46.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 complete : 0=0.1%, 4=93.4%, 8=1.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.926 issued rwts: total=5177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename1: (groupid=0, jobs=1): err= 0: pid=100673: Mon Dec 16 01:46:16 2024 00:24:46.927 read: IOPS=662, BW=2648KiB/s (2712kB/s)(25.9MiB/10016msec) 00:24:46.927 slat (usec): min=4, max=11023, avg=16.36, stdev=204.39 00:24:46.927 clat (usec): min=1629, max=71777, avg=24012.38, stdev=7394.19 00:24:46.927 lat (usec): min=1638, max=71786, avg=24028.74, stdev=7399.66 00:24:46.927 clat percentiles (usec): 00:24:46.927 | 1.00th=[10028], 5.00th=[11994], 10.00th=[12780], 20.00th=[19792], 00:24:46.927 | 30.00th=[21890], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:24:46.927 | 70.00th=[24249], 80.00th=[29754], 90.00th=[35914], 95.00th=[35914], 00:24:46.927 | 99.00th=[47449], 99.50th=[47973], 99.90th=[50594], 99.95th=[58459], 00:24:46.927 | 99.99th=[71828] 00:24:46.927 bw ( KiB/s): min= 1904, max= 4096, per=12.41%, avg=2646.80, stdev=480.27, samples=20 00:24:46.927 iops : min= 476, max= 1024, avg=661.70, stdev=120.07, samples=20 00:24:46.927 lat (msec) : 2=0.03%, 4=0.03%, 10=0.92%, 20=19.85%, 50=79.05% 00:24:46.927 lat (msec) : 100=0.12% 00:24:46.927 cpu : usr=35.95%, sys=2.73%, ctx=1181, majf=0, minf=9 00:24:46.927 IO depths : 1=1.7%, 2=6.7%, 4=20.9%, 8=59.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:46.927 complete : 0=0.0%, 4=93.4%, 8=1.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=6631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100674: Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100675: Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 0: pid=100676: Mon Dec 16 01:46:16 2024 00:24:46.927 read: IOPS=670, BW=2681KiB/s (2745kB/s)(26.2MiB/10002msec) 00:24:46.927 slat (usec): min=8, max=10022, avg=22.68, stdev=304.93 00:24:46.927 clat (usec): min=2186, max=65945, avg=23730.07, stdev=8223.65 00:24:46.927 lat (usec): min=2196, max=65954, avg=23752.75, stdev=8232.99 00:24:46.927 clat percentiles (usec): 00:24:46.927 | 1.00th=[ 6718], 5.00th=[ 9896], 10.00th=[12387], 20.00th=[17433], 00:24:46.927 | 30.00th=[21365], 40.00th=[23200], 50.00th=[23987], 60.00th=[23987], 00:24:46.927 | 70.00th=[24249], 80.00th=[28705], 90.00th=[35914], 95.00th=[36439], 00:24:46.927 | 99.00th=[47973], 99.50th=[48497], 99.90th=[52167], 99.95th=[61080], 00:24:46.927 | 99.99th=[65799] 00:24:46.927 bw ( KiB/s): min= 1832, max= 4864, per=12.59%, avg=2684.63, stdev=673.29, samples=19 00:24:46.927 iops : min= 458, max= 1216, avg=671.16, stdev=168.32, samples=19 00:24:46.927 lat (msec) : 4=0.09%, 10=5.45%, 20=19.98%, 50=74.07%, 100=0.42% 00:24:46.927 cpu : usr=31.61%, sys=2.74%, ctx=1181, majf=0, minf=9 00:24:46.927 IO depths : 1=1.5%, 2=6.7%, 4=21.5%, 8=58.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=6703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 0: pid=100677: Mon Dec 16 01:46:16 2024 00:24:46.927 read: IOPS=671, BW=2685KiB/s (2749kB/s)(26.3MiB/10020msec) 00:24:46.927 slat (usec): min=6, max=8023, avg=19.56, stdev=234.13 00:24:46.927 clat (usec): min=1521, max=59772, avg=23691.90, stdev=8070.64 00:24:46.927 lat (usec): min=1530, max=59780, avg=23711.46, stdev=8074.90 00:24:46.927 clat percentiles (usec): 00:24:46.927 | 1.00th=[ 7308], 5.00th=[ 9372], 10.00th=[12911], 20.00th=[16057], 00:24:46.927 | 30.00th=[22414], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 
00:24:46.927 | 70.00th=[23987], 80.00th=[27395], 90.00th=[35914], 95.00th=[35914], 00:24:46.927 | 99.00th=[47973], 99.50th=[47973], 99.90th=[59507], 99.95th=[59507], 00:24:46.927 | 99.99th=[60031] 00:24:46.927 bw ( KiB/s): min= 1808, max= 5056, per=12.59%, avg=2683.60, stdev=666.46, samples=20 00:24:46.927 iops : min= 452, max= 1264, avg=670.90, stdev=166.61, samples=20 00:24:46.927 lat (msec) : 2=0.03%, 4=0.30%, 10=6.17%, 20=16.80%, 50=76.52% 00:24:46.927 lat (msec) : 100=0.18% 00:24:46.927 cpu : usr=34.67%, sys=2.86%, ctx=972, majf=0, minf=9 00:24:46.927 IO depths : 1=1.3%, 2=6.1%, 4=20.4%, 8=60.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=93.4%, 8=1.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=6725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100678: Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100679: Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=4, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100680: Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100681: Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.927 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.927 filename2: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=100682: 
Mon Dec 16 01:46:16 2024 00:24:46.927 cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0 00:24:46.927 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.927 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:46.928 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:46.928 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:46.928 00:24:46.928 Run status group 0 (all jobs): 00:24:46.928 READ: bw=20.8MiB/s (21.8MB/s), 2648KiB/s-3173KiB/s (2712kB/s-3249kB/s), io=209MiB (219MB), run=6507-10027msec 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # trap - ERR 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # print_backtrace 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]] 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1159 -- # args=('/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' '/dev/fd/61' '/dev/fd/62' '--spdk_json_conf' '--ioengine=spdk_bdev' '/dev/fd/62' 'fio_dif_rand_params' 'fio_dif_rand_params' '--iso' '--transport=tcp') 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1159 -- # local args 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1161 -- # xtrace_disable 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:46.928 ========== Backtrace start: ========== 00:24:46.928 00:24:46.928 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1356 -> fio_plugin(["/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev"],["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:24:46.928 ... 00:24:46.928 1351 break 00:24:46.928 1352 fi 00:24:46.928 1353 done 00:24:46.928 1354 00:24:46.928 1355 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:24:46.928 1356 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:24:46.928 1357 } 00:24:46.928 1358 00:24:46.928 1359 function fio_bdev() { 00:24:46.928 1360 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:24:46.928 1361 } 00:24:46.928 ... 00:24:46.928 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1360 -> fio_bdev(["--ioengine=spdk_bdev"],["--spdk_json_conf"],["/dev/fd/62"],["/dev/fd/61"]) 00:24:46.928 ... 00:24:46.928 1355 # Preload the sanitizer library to fio if fio_plugin was compiled with it 00:24:46.928 1356 LD_PRELOAD="$asan_lib $plugin" "$fio_dir"/fio "$@" 00:24:46.928 1357 } 00:24:46.928 1358 00:24:46.928 1359 function fio_bdev() { 00:24:46.928 1360 fio_plugin "$rootdir/build/fio/spdk_bdev" "$@" 00:24:46.928 1361 } 00:24:46.928 1362 00:24:46.928 1363 function fio_nvme() { 00:24:46.928 1364 fio_plugin "$rootdir/build/fio/spdk_nvme" "$@" 00:24:46.928 1365 } 00:24:46.928 ... 00:24:46.928 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:82 -> fio(["/dev/fd/62"]) 00:24:46.928 ... 00:24:46.928 77 FIO 00:24:46.928 78 done 00:24:46.928 79 } 00:24:46.928 80 00:24:46.928 81 fio() { 00:24:46.928 => 82 fio_bdev --ioengine=spdk_bdev --spdk_json_conf "$@" <(gen_fio_conf) 00:24:46.928 83 } 00:24:46.928 84 00:24:46.928 85 fio_dif_1() { 00:24:46.928 86 create_subsystems 0 00:24:46.928 87 fio <(create_json_sub_conf 0) 00:24:46.928 ... 
00:24:46.928 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:112 -> fio_dif_rand_params([])
00:24:46.928 ...
00:24:46.928 107 destroy_subsystems 0
00:24:46.928 108
00:24:46.928 109 NULL_DIF=2 bs=4k numjobs=8 iodepth=16 runtime="" files=2
00:24:46.928 110
00:24:46.928 111 create_subsystems 0 1 2
00:24:46.928 => 112 fio <(create_json_sub_conf 0 1 2)
00:24:46.928 113 destroy_subsystems 0 1 2
00:24:46.928 114
00:24:46.928 115 NULL_DIF=1 bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5 files=1
00:24:46.928 116
00:24:46.928 117 create_subsystems 0 1
00:24:46.928 ...
00:24:46.928 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1129 -> run_test(["fio_dif_rand_params"],["fio_dif_rand_params"])
00:24:46.928 ...
00:24:46.928 1124 timing_enter $test_name
00:24:46.928 1125 echo "************************************"
00:24:46.928 1126 echo "START TEST $test_name"
00:24:46.928 1127 echo "************************************"
00:24:46.928 1128 xtrace_restore
00:24:46.928 1129 time "$@"
00:24:46.928 1130 xtrace_disable
00:24:46.928 1131 echo "************************************"
00:24:46.928 1132 echo "END TEST $test_name"
00:24:46.928 1133 echo "************************************"
00:24:46.928 1134 timing_exit $test_name
00:24:46.928 ...
00:24:46.928 in /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh:143 -> main(["--transport=tcp"],["--iso"])
00:24:46.928 ...
00:24:46.928 138
00:24:46.928 139 create_transport
00:24:46.928 140
00:24:46.928 141 run_test "fio_dif_1_default" fio_dif_1
00:24:46.928 142 run_test "fio_dif_1_multi_subsystems" fio_dif_1_multi_subsystems
00:24:46.928 => 143 run_test "fio_dif_rand_params" fio_dif_rand_params
00:24:46.928 144 run_test "fio_dif_digest" fio_dif_digest
00:24:46.928 145
00:24:46.928 146 trap - SIGINT SIGTERM EXIT
00:24:46.928 147 nvmftestfini
00:24:46.928 ...
00:24:46.928 00:24:46.928 ========== Backtrace end ========== 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1198 -- # return 0 00:24:46.928 00:24:46.928 real 0m17.780s 00:24:46.928 user 1m53.032s 00:24:46.928 sys 0m4.136s 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # process_shm --id 0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@812 -- # type=--id 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@813 -- # id=0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:46.928 nvmf_trace.0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@827 -- # return 0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1 -- # nvmftestfini 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@121 -- # sync 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@124 -- # set +e 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.928 rmmod nvme_tcp 00:24:46.928 rmmod nvme_fabrics 00:24:46.928 rmmod nvme_keyring 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@128 -- # set -e 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@129 -- # return 0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@517 -- # '[' -n 100184 ']' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@518 -- # killprocess 100184 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@954 -- # '[' -z 100184 ']' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@958 -- # kill -0 100184 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@959 -- # uname 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100184 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:46.928 killing process with pid 100184 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 100184' 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@973 -- # kill 100184 00:24:46.928 01:46:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@978 -- # wait 100184 00:24:46.928 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:46.928 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:46.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:46.928 Waiting for block devices as requested 00:24:46.928 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:47.188 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@297 -- # iptr 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # iptables-save 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:47.188 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:47.447 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:47.447 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:47.447 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.447 01:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:47.447 01:46:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:47.447 01:46:17 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@300 -- # 
return 0
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1129 -- # trap - ERR
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1129 -- # print_backtrace
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1157 -- # [[ ehxBET =~ e ]]
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1159 -- # args=('/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh' 'nvmf_dif' '/home/vagrant/spdk_repo/autorun-spdk.conf')
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1159 -- # local args
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1161 -- # xtrace_disable
00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:24:47.447 ========== Backtrace start: ==========
00:24:47.447
00:24:47.447 in /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh:1129 -> run_test(["nvmf_dif"],["/home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh"])
00:24:47.447 ...
00:24:47.447 1124 timing_enter $test_name
00:24:47.447 1125 echo "************************************"
00:24:47.447 1126 echo "START TEST $test_name"
00:24:47.447 1127 echo "************************************"
00:24:47.447 1128 xtrace_restore
00:24:47.447 1129 time "$@"
00:24:47.447 1130 xtrace_disable
00:24:47.447 1131 echo "************************************"
00:24:47.447 1132 echo "END TEST $test_name"
00:24:47.447 1133 echo "************************************"
00:24:47.447 1134 timing_exit $test_name
00:24:47.447 ...
00:24:47.447 in /home/vagrant/spdk_repo/spdk/autotest.sh:289 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"])
00:24:47.447 ...
00:24:47.447 284 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:24:47.447 285 if [[ $SPDK_TEST_URING -eq 0 ]]; then
00:24:47.447 286 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:24:47.447 287 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT
00:24:47.447 288 fi
00:24:47.447 => 289 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh
00:24:47.447 290 run_test "nvmf_abort_qd_sizes" $rootdir/test/nvmf/target/abort_qd_sizes.sh
00:24:47.447 291 # The keyring tests utilize NVMe/TLS
00:24:47.447 292 run_test "keyring_file" "$rootdir/test/keyring/file.sh"
00:24:47.447 293 if [[ "$CONFIG_HAVE_KEYUTILS" == y ]]; then
00:24:47.447 294 run_test "keyring_linux" "$rootdir/scripts/keyctl-session-wrapper" \
00:24:47.447 ...
00:24:47.447 00:24:47.447 ========== Backtrace end ========== 00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1198 -- # return 0 00:24:47.447 00:24:47.447 real 0m42.369s 00:24:47.447 user 2m52.377s 00:24:47.447 sys 0m11.898s 00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1 -- # autotest_cleanup 00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1396 -- # local autotest_es=17 00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:47.447 01:46:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:59.656 INFO: APP EXITING 00:24:59.656 INFO: killing all VMs 00:24:59.656 INFO: killing vhost app 00:24:59.656 INFO: EXIT DONE 00:24:59.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:59.915 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:24:59.915 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:00.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.483 Cleaning 00:25:00.483 Removing: /var/run/dpdk/spdk0/config 00:25:00.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:00.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:00.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:00.483 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:00.483 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:00.483 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:00.483 Removing: /var/run/dpdk/spdk1/config 00:25:00.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:00.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:00.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:00.483 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:00.483 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:00.483 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:00.483 Removing: /var/run/dpdk/spdk2/config 00:25:00.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:00.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:00.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:00.483 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:00.483 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:00.483 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:00.483 Removing: /var/run/dpdk/spdk3/config 00:25:00.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:00.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:00.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:00.483 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:00.483 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:00.483 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:00.483 Removing: /var/run/dpdk/spdk4/config 00:25:00.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:00.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:00.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:00.483 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:00.483 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:00.743 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:00.743 Removing: /dev/shm/nvmf_trace.0 00:25:00.743 Removing: /dev/shm/spdk_tgt_trace.pid71765 00:25:00.743 Removing: /var/run/dpdk/spdk0 00:25:00.743 Removing: /var/run/dpdk/spdk1 00:25:00.743 Removing: /var/run/dpdk/spdk2 00:25:00.743 Removing: 
/var/run/dpdk/spdk3
00:25:00.743 Removing: /var/run/dpdk/spdk4
00:25:00.743 Removing: /var/run/dpdk/spdk_pid100238
00:25:00.743 Removing: /var/run/dpdk/spdk_pid100395
00:25:00.743 Removing: /var/run/dpdk/spdk_pid100547
00:25:00.743 Removing: /var/run/dpdk/spdk_pid100644
00:25:00.743 Removing: /var/run/dpdk/spdk_pid71612
00:25:00.743 Removing: /var/run/dpdk/spdk_pid71765
00:25:00.743 Removing: /var/run/dpdk/spdk_pid71958
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72039
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72067
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72176
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72194
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72328
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72524
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72672
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72750
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72821
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72913
00:25:00.743 Removing: /var/run/dpdk/spdk_pid72985
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73018
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73048
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73123
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73193
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73641
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73680
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73720
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73723
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73778
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73792
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73846
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73849
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73899
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73905
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73945
00:25:00.743 Removing: /var/run/dpdk/spdk_pid73963
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74086
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74122
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74204
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74525
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74543
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74574
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74587
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74603
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74622
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74635
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74651
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74664
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74678
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74693
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74712
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74726
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74741
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74759
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74774
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74784
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74803
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74816
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74832
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74862
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74876
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74905
00:25:00.743 Removing: /var/run/dpdk/spdk_pid74972
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75000
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75010
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75033
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75048
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75050
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75092
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75106
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75129
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75144
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75148
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75152
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75167
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75171
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75180
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75190
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75213
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75245
00:25:00.743 Removing: /var/run/dpdk/spdk_pid75249
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75277
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75287
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75289
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75335
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75341
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75373
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75375
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75377
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75390
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75392
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75398
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75407
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75409
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75491
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75533
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75640
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75679
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75724
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75733
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75755
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75770
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75801
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75817
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75889
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75905
00:25:01.002 Removing: /var/run/dpdk/spdk_pid75949
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76011
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76073
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76096
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76196
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76238
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76271
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76497
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76588
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76618
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76642
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76681
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76709
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76748
00:25:01.002 Removing: /var/run/dpdk/spdk_pid76774
00:25:01.002 Removing: /var/run/dpdk/spdk_pid77165
00:25:01.002 Removing: /var/run/dpdk/spdk_pid77205
00:25:01.002 Removing: /var/run/dpdk/spdk_pid77541
00:25:01.002 Removing: /var/run/dpdk/spdk_pid78002
00:25:01.002 Removing: /var/run/dpdk/spdk_pid78265
00:25:01.003 Removing: /var/run/dpdk/spdk_pid79105
00:25:01.003 Removing: /var/run/dpdk/spdk_pid80007
00:25:01.003 Removing: /var/run/dpdk/spdk_pid80120
00:25:01.003 Removing: /var/run/dpdk/spdk_pid80189
00:25:01.003 Removing: /var/run/dpdk/spdk_pid81588
00:25:01.003 Removing: /var/run/dpdk/spdk_pid81890
00:25:01.003 Removing: /var/run/dpdk/spdk_pid85612
00:25:01.003 Removing: /var/run/dpdk/spdk_pid85972
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86081
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86213
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86234
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86255
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86275
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86361
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86489
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86625
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86701
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86888
00:25:01.003 Removing: /var/run/dpdk/spdk_pid86955
00:25:01.003 Removing: /var/run/dpdk/spdk_pid87036
00:25:01.003 Removing: /var/run/dpdk/spdk_pid87381
00:25:01.003 Removing: /var/run/dpdk/spdk_pid87793
00:25:01.003 Removing: /var/run/dpdk/spdk_pid87794
00:25:01.003 Removing: /var/run/dpdk/spdk_pid87795
00:25:01.003 Removing: /var/run/dpdk/spdk_pid88054
00:25:01.003 Removing: /var/run/dpdk/spdk_pid88295
00:25:01.003 Removing: /var/run/dpdk/spdk_pid88297
00:25:01.003 Removing: /var/run/dpdk/spdk_pid90597
00:25:01.003 Removing: /var/run/dpdk/spdk_pid90977
00:25:01.003 Removing: /var/run/dpdk/spdk_pid90980
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91300
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91318
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91332
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91363
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91368
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91459
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91461
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91569
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91571
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91679
00:25:01.003 Removing: /var/run/dpdk/spdk_pid91687
00:25:01.003 Removing: /var/run/dpdk/spdk_pid92121
00:25:01.262 Removing: /var/run/dpdk/spdk_pid92169
00:25:01.262 Removing: /var/run/dpdk/spdk_pid92278
00:25:01.262 Removing: /var/run/dpdk/spdk_pid92357
00:25:01.262 Removing: /var/run/dpdk/spdk_pid92703
00:25:01.262 Removing: /var/run/dpdk/spdk_pid92899
00:25:01.262 Removing: /var/run/dpdk/spdk_pid93319
00:25:01.262 Removing: /var/run/dpdk/spdk_pid93854
00:25:01.262 Removing: /var/run/dpdk/spdk_pid94697
00:25:01.262 Removing: /var/run/dpdk/spdk_pid95327
00:25:01.262 Removing: /var/run/dpdk/spdk_pid95330
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97330
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97383
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97430
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97478
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97587
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97633
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97683
00:25:01.262 Removing: /var/run/dpdk/spdk_pid97730
00:25:01.262 Removing: /var/run/dpdk/spdk_pid98084
00:25:01.262 Removing: /var/run/dpdk/spdk_pid99290
00:25:01.262 Removing: /var/run/dpdk/spdk_pid99418
00:25:01.262 Removing: /var/run/dpdk/spdk_pid99653
00:25:01.262 Clean
00:25:01.521 01:46:32 nvmf_dif -- common/autotest_common.sh@1453 -- # return 17
00:25:01.521 01:46:32 nvmf_dif -- common/autotest_common.sh@1 -- # :
00:25:01.521 01:46:32 nvmf_dif -- common/autotest_common.sh@1 -- # exit 1
00:25:01.521 01:46:32 -- spdk/autorun.sh@27 -- $ trap - ERR
00:25:01.521 01:46:32 -- spdk/autorun.sh@27 -- $ print_backtrace
00:25:01.521 01:46:32 -- common/autotest_common.sh@1157 -- $ [[ ehxBET =~ e ]]
00:25:01.521 01:46:32 -- common/autotest_common.sh@1159 -- $ args=('/home/vagrant/spdk_repo/autorun-spdk.conf')
00:25:01.521 01:46:32 -- common/autotest_common.sh@1159 -- $ local args
00:25:01.521 01:46:32 -- common/autotest_common.sh@1161 -- $ xtrace_disable
00:25:01.521 01:46:32 -- common/autotest_common.sh@10 -- $ set +x
00:25:01.521 ========== Backtrace start: ==========
00:25:01.521
00:25:01.521 in spdk/autorun.sh:27 -> main(["/home/vagrant/spdk_repo/autorun-spdk.conf"])
00:25:01.521 ...
00:25:01.521 22 trap 'timing_finish || exit 1' EXIT
00:25:01.521 23
00:25:01.521 24 # Runs agent scripts
00:25:01.521 25 $rootdir/autobuild.sh "$conf"
00:25:01.521 26 if ((SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1)); then
00:25:01.521 => 27 sudo -E $rootdir/autotest.sh "$conf"
00:25:01.521 28 fi
00:25:01.521 ...
00:25:01.521
00:25:01.521 ========== Backtrace end ==========
00:25:01.521 01:46:32 -- common/autotest_common.sh@1198 -- $ return 0
00:25:01.521 01:46:32 -- spdk/autorun.sh@1 -- $ timing_finish
00:25:01.521 01:46:32 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:25:01.521 01:46:32 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:01.521 01:46:32 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:25:01.521 01:46:32 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:02.100 [Pipeline] }
00:25:02.119 [Pipeline] // timeout
00:25:02.127 [Pipeline] }
00:25:02.143 [Pipeline] // stage
00:25:02.150 [Pipeline] }
00:25:02.154 ERROR: script returned exit code 1
00:25:02.154 Setting overall build result to FAILURE
00:25:02.171 [Pipeline] // catchError
00:25:02.180 [Pipeline] stage
00:25:02.182 [Pipeline] { (Stop VM)
00:25:02.196 [Pipeline] sh
00:25:02.477 + vagrant halt
00:25:05.765 ==> default: Halting domain...
00:25:12.385 [Pipeline] sh
00:25:12.664 + vagrant destroy -f
00:25:15.196 ==> default: Removing domain...
00:25:15.467 [Pipeline] sh
00:25:15.748 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:25:15.757 [Pipeline] }
00:25:15.772 [Pipeline] // stage
00:25:15.777 [Pipeline] }
00:25:15.791 [Pipeline] // dir
00:25:15.796 [Pipeline] }
00:25:15.810 [Pipeline] // wrap
00:25:15.816 [Pipeline] }
00:25:15.828 [Pipeline] // catchError
00:25:15.838 [Pipeline] stage
00:25:15.840 [Pipeline] { (Epilogue)
00:25:15.852 [Pipeline] sh
00:25:16.134 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:18.050 [Pipeline] catchError
00:25:18.052 [Pipeline] {
00:25:18.065 [Pipeline] sh
00:25:18.346 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:18.605 Artifacts sizes are good
00:25:18.614 [Pipeline] }
00:25:18.628 [Pipeline] // catchError
00:25:18.639 [Pipeline] archiveArtifacts
00:25:18.645 Archiving artifacts
00:25:18.846 [Pipeline] cleanWs
00:25:18.857 [WS-CLEANUP] Deleting project workspace...
00:25:18.858 [WS-CLEANUP] Deferred wipeout is used...
00:25:18.864 [WS-CLEANUP] done
00:25:18.865 [Pipeline] }
00:25:18.883 [Pipeline] // stage
00:25:18.890 [Pipeline] }
00:25:18.906 [Pipeline] // node
00:25:18.912 [Pipeline] End of Pipeline
00:25:18.966 Finished: FAILURE